1 Introduction

Forward diffusion processes are well suited to describe the smoothing of a given signal or image. This blurring implies a loss of high frequencies or details in the original data. As a result, the inverse problem, backward diffusion, suffers from a lack of the information needed to uniquely reconstruct the original data. Noise in the measured data aggravates this difficulty even further. Consequently, a solution to the inverse problem—if it exists at all—is highly sensitive and depends heavily on the input data: even the smallest perturbation of the initial data can have a large impact on the evolution and may cause large deviations. It thus becomes clear that backward diffusion processes necessitate further stabilisation.

Previous Work on Backward Diffusion Already more than 60 years ago, John [25] discussed the quality of a numerical solution to the inverse diffusion problem given that a solution exists, and that it is bounded and non-negative. Since then, a large number of different regularisation methods have evolved which achieve stability by bounding the noise of the measured and the unperturbed data [49], by operator splitting [26], by Fourier regularisation [12], or by a modified Tikhonov regulariser [55]. Hào and Duc [22] suggest a mollification method where stability for the inverse diffusion problem follows from a convolution with the Dirichlet kernel. In [23], the same authors provide a regularisation method for backward parabolic equations with time-dependent coefficients. Ternat et al. [51] suggest low-pass filters and fourth-order regularisation terms for stabilisation.

Backward parabolic differential equations also enjoy high popularity in the image analysis community, where they have, for example, been used for image restoration and deblurring. The first contribution to backward diffusion dates back to 1955, when Kovásznay and Joseph [28] proposed to use the scaled negative Laplacian for contour enhancement. Gabor [13] observed that the isotropy of the Laplacian operator amplifies accidental noise at contour lines while it enhances the contours themselves. As a remedy, he proposed to restrict the contrast enhancement to the direction orthogonal to the contours and—in a second step—suggested additional smoothing in the tangential direction. Lindenbaum et al. [29] make use of averaged derivatives in order to improve Gabor's direction-sensitive filter. However, the authors point out that smoothing in only one direction favours the emergence of artefacts in nearly isotropic image regions. They recommend using the Perona–Malik filter [40] instead. Forces of Perona–Malik type are also used by Pollak et al. [43], who specify a family of evolution equations to sharpen edges and suppress noise in the context of image segmentation. In [50], ter Haar Romeny et al. stress the influence of higher-order time derivatives on the Gaussian-deblurred image. Referring to the heat equation, the authors express the time derivatives in the spatial domain and approximate them using Gaussian derivatives. Steiner et al. [47] highlight how backward diffusion can be used for feature enhancement in planar curves.

In the field of image processing, a frequently used stabilisation technique constrains the extrema in order to enforce a maximum–minimum principle. This is, for example, implemented in the inverse diffusion filter of Osher and Rudin [38], which imposes zero diffusivities at extrema and applies backward diffusion everywhere else. The so-called forward-and-backward (FAB) diffusion of Gilboa et al. [18] follows a slightly different approach. Closely related to the Perona–Malik filter [40], it uses negative diffusivities for a specific range of gradient magnitudes, while it imposes forward diffusion for low and zero gradient magnitudes. By doing so, the filter prevents the output values from exploding at extrema. However, it is worth mentioning that—so far—all adequate implementations of inverse diffusion processes with forward or zero diffusivities at extrema require sophisticated numerical schemes. They use, for example, minmod discretisations of the Laplacian [38], nonstandard finite difference approximations of the squared gradient magnitude [53], or splittings into two-pixel interactions [54].

Another, less popular stabilisation approach relies on a fidelity term, which has been used to penalise deviations from the input image [5, 46] or from the average grey value of the desired range [45]. Consequently, the weights of the fidelity term and of the diffusion term together control the range of the filtered image.

Further methods achieve stabilisation by means of a regularisation strategy built on FFT-based operators [6,7,8] or by a restriction to polynomials of fixed finite degree [24]. Mair et al. [31] discuss the well-posedness of deblurring Gaussian blur in the discrete image domain based on analytic number theory.

In summary, the presented methods offer an insight into the challenge of handling backward diffusion in practice and underline the demand for careful stabilisation strategies and sophisticated numerical methods.

In this paper, we present an alternative approach to dealing with backward diffusion problems. It prefers smarter modelling over smarter numerics. To understand it better, it is useful to recapitulate some relations between diffusion and energy minimisation.

Diffusion and Energy Minimisation For the sake of convenience, we assume a one-dimensional evolution that smoothes an initial signal \(f: [a,b] \rightarrow {\mathbb {R}}\). In this context, the original signal f serves as an initial state of the diffusion equation

$$\begin{aligned} \partial _t u = \partial _x \bigl (g(u_x^2)\,u_x\bigr ) \end{aligned}$$
(1)

where \(u=u(x,t)\) represents the filtered outcome with \(u(x,0)=f(x)\). Additionally, let \(u_x = \partial _x u\) and assume reflecting boundary conditions at \(x=a\) and \(x=b\). Given a non-negative diffusivity function g, growing diffusion times t lead to simpler representations of the input signal. From Perona and Malik’s work [40], we know that the smoothing effect at signal edges can be reduced if g is a decreasing function of the contrast \(u_x^2\). As long as the flux function \(\varPhi (u_x) := g(u_x^2)\,u_x\) is strictly increasing in \(u_x\), the corresponding forward diffusion process \(\partial _tu = \varPhi '(u_x) u_{xx}\) involves no edge sharpening. This diffusion can be regarded as the gradient descent evolution which minimises the energy

$$\begin{aligned} E[u] = \int _a^b \varPsi (u_x^2) \, \mathrm {d}x. \end{aligned}$$
(2)

The potential function \({\tilde{\varPsi }}(u_x)=\varPsi (u_x^2)\) is strictly convex in \(u_x\), increasing in \(u_x^2\), and fulfils \(\varPsi '(u_x^2)=g(u_x^2)\). Furthermore, the energy functional has a flat minimiser which—due to the strict convexity—is unique. The gradient descent/diffusion evolution is well-posed and converges towards this minimiser for \(t \rightarrow \infty \). Due to this classical emergence of well-posed forward diffusion from strictly convex energies, it seems natural to believe that backward diffusion processes are necessarily associated with non-convex energies. However, as we will see, this conjecture is wrong.
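To illustrate the classical situation, consider a penaliser of Charbonnier type (our example; it is not part of the model developed below):

$$\begin{aligned} \varPsi (u_x^2) = 2\sqrt{1+u_x^2}, \qquad g(u_x^2) = \varPsi '(u_x^2) = \frac{1}{\sqrt{1+u_x^2}}, \qquad \varPhi (u_x) = \frac{u_x}{\sqrt{1+u_x^2}}, \qquad \varPhi '(u_x) = \bigl (1+u_x^2\bigr )^{-\nicefrac {3}{2}} > 0. \end{aligned}$$

The diffusivity g decreases in the contrast \(u_x^2\), yet the flux is strictly increasing and \({\tilde{\varPsi }}(u_x) = 2\sqrt{1+u_x^2}\) is strictly convex, so the resulting evolution is a well-posed forward diffusion.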

Our Contribution In our article, we show that a specific class of backward diffusion processes consists of gradient descent evolutions of energies with one unexpected property: they are convex! Our second innovation is the incorporation of a specific constraint: we impose reflecting boundary conditions in the diffusion co-domain. This means that in the case of greyscale images with an allowed grey value range of [0, 255], the occurring values are mirrored at the boundary positions 0 and 255. While such range constraints have shown their usefulness in other contexts (see e.g. [34]), to our knowledge they have never been used for stabilising backward diffusions. For our novel backward diffusion models, we also establish a surprising numerical fact: a simple explicit scheme turns out to be stable and convergent. Last but not least, we apply our models to the contrast enhancement of greyscale and colour images.

This article is a revised version of our conference contribution [3] which we extend in several aspects. First, we enhance our model for convex backward diffusion to support not only a global and weighted setting but also a localised variant. We analyse this extended model in terms of stability and convergence towards a unique minimiser. Furthermore, we formulate a simple explicit scheme for our newly proposed approach which shares all important properties with the time-continuous evolution. In this context, we provide a detailed discussion on the selection of suitable time step sizes. Additionally, we suggest two new applications: global contrast enhancement of digital colour images and local contrast enhancement of digital grey and colour images.

Structure of the Paper In Sect. 2, we present our model for convex backward diffusion with range constraints. We describe a general approach which allows us to formulate weighted local and global evolutions. Section 3 includes proofs for model properties such as range and rank-order preservation, as well as a convergence analysis and the derivation of explicit steady-state solutions. Section 4 provides a simple explicit scheme which can be used to solve the occurring initial value problem. In Sect. 5, we explain how to enhance the global and local contrast of digital images using the proposed model. Furthermore, we discuss the relation to the existing literature on contrast enhancement. In Sect. 6, we draw conclusions from our findings and give an outlook on future research.

2 Model

Let us now explore the roots of our model and derive—in a second step—the particle evolution which forms the heart of our method and which is given by the gradient descent of a convex energy.

2.1 Motivation from Swarm Dynamics

The idea behind our model goes back to the scenario of describing a one-dimensional evolution of particles within a closed system. The recent literature on mathematical swarm models employs a pairwise potential \(U: {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) to characterise the behaviour of individual particles (see e.g. [9,10,11, 14, 16] and the references therein). The potential function allows one to steer attractive and repulsive forces among swarm mates. Physically simplified models like [15] neglect inertia and describe the individual particle velocity \({\partial _t \textit{\textbf{v}}_i}\) within a swarm of size N directly as

$$\begin{aligned} \partial _t \textit{\textbf{v}}_i = -\sum \limits _{\begin{array}{c} \scriptstyle j = 1 \\ \scriptstyle j \ne i \end{array}}^N \varvec{\nabla } U(|\textit{\textbf{v}}_i-\textit{\textbf{v}}_j|), \qquad i = 1,\ldots ,N, \end{aligned}$$
(3)

where \(\textit{\textbf{v}}_i\) and \(\textit{\textbf{v}}_j\) denote particle positions in \({\mathbb {R}}^n\). These models are also referred to as first-order models. Often they are inspired by biology and describe long-range attractive and short-range repulsive behaviour between swarm members. The interplay of attractive and repulsive forces leads to flocking and stabilises the whole swarm. Inverting this behaviour—resulting in short-range attractive and long-range repulsive forces—leads to a highly unstable scenario in which the swarm splits up into small separating groups which might never reach a point where they stand still. One would expect that a restriction to repulsive forces only would increase this instability even further. However, we will present a model which copes well with exactly this situation. In our set-up, every particle moves within the open interval (0, 1) and has an interaction radius of size 1. Keeping this in mind, let us briefly examine the two main assumptions of the evolution. First, every particle is reflected at the left and right domain boundaries. Second, the particles repel each other and—furthermore—get repelled by the reflections. However, due to the limited viewing range, only one of the two reflections of a certain particle is considered at any given time, namely the one which is closer to the reference particle (see Fig. 1). A special case occurs if the reference particle is located at position 0.5: the repulsive forces of both of its own reflections cancel each other out.
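To illustrate this visibility rule with concrete numbers: a particle at position 0.3 has its own reflections at \(-0.3\) (left) and \(2-0.3=1.7\) (right). The distances \(|0.3-(-0.3)| = 0.6 < 1\) and \(|1.7-0.3| = 1.4 > 1\) show that only the left reflection lies within the interaction radius and thus exerts repulsion. For a particle at 0.5, both distances equal 1 and the repulsions of its two reflections cancel.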

Fig. 1 Four particles with positions in (0, 1) and their reflections at the left and right domain boundaries (labelled \(_l\) and \(_r\) accordingly). Particle 2, for example, gets repelled by the particles \(1,3,4,1_l,2_r,3_r,4_r\)

2.2 Discrete Variational Model

We propose a dynamical system which has its roots in a spatial discretisation of the energy functional (2). Furthermore, we make use of a decreasing penaliser function \(\varPsi : {\mathbb {R}}_0^+ \rightarrow {\mathbb {R}}\) and a global range constraint on u. The corresponding flux function \(\varPhi \) is defined as \(\varPhi (s) := \varPsi '(s^2) s\).

Our goal is to describe the evolution of one-dimensional—not necessarily distinct—particle positions \(v_i \in (0,1)\), where \(i = 1,\ldots ,N\). Therefore, we extend the position vector \(\textit{\textbf{v}} = (v_1,\ldots ,v_N)^{\mathrm {T}}\) with the additional coordinates \(v_{N+1},\ldots ,v_{2N}\) defined as \(v_{2N+1-i} := 2 - v_i \in (1,2)\). This extended position vector \(\textit{\textbf{v}} \in (0,2)^{2N}\) allows to evaluate the energy function

$$\begin{aligned} E(\textit{\textbf{v}},\textit{\textbf{W}}) = \frac{1}{4} \cdot \sum \limits _{i = 1}^{2N} \sum \limits _{j = 1}^{2N} w_{i,j} \cdot \varPsi ( (v_j - v_i)^2 ), \end{aligned}$$
(4)

which models the repulsion potential between all positions \(v_i\) and \(v_j\). The coefficient \(w_{i,j}\) denotes entry j in row i of a constant non-negative weight matrix \(\textit{\textbf{W}} = (w_{i,j}) \in ({\mathbb {R}}_0^+)^{2N \times 2N}\). It models the importance of the interaction between positions \(v_i\) and \(v_j\). All diagonal elements of the weight matrix are positive, i.e. \(w_{i,i} > 0\) for all \(i \in \{1,2,\ldots ,2N\}\). In addition, we assume that the weights for all extended positions are the same as those for the original ones. Namely, we have

$$\begin{aligned} w_{i,j} = w_{i, \, 2N+1-j} = w_{2N+1-i, \, j} = w_{2N+1-i, \, 2N+1-j} \end{aligned}$$
(5)

for \(1 \le i,j \le N\).
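For illustration, a minimal NumPy sketch (ours; the function names are hypothetical) of how the extended position vector and a weight matrix obeying (5) can be built from the original N positions:

```python
import numpy as np

def extend_positions(v):
    """Append the mirrored coordinates v_{2N+1-i} = 2 - v_i to v in (0,1)^N."""
    return np.concatenate([v, 2.0 - v[::-1]])

def extend_weights(W):
    """Extend a non-negative N x N weight matrix to 2N x 2N according to (5)."""
    return np.block([[W,          W[:, ::-1]],
                     [W[::-1, :], W[::-1, ::-1]]])
```

With 0-based indexing, entry \((i,\,2N-1-j)\) of the extended matrix equals entry (i, j) of the original one, which is exactly condition (5).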

Table 1 One exemplary class of penaliser functions \(\varPsi (s^2)\) for \(s \in [0,1]\) with \(n \in {\mathbb {N}}\), \(a > 0\) and corresponding diffusivity \(\varPsi '(s^2)\) and flux \(\varPhi (s)\) functions
Fig. 2 Top: exemplary penaliser functions \({\tilde{\varPsi }}_{1,1}\), \({\tilde{\varPsi }}_{1,2}\), and \({\tilde{\varPsi }}_{1,3}\) extended to the interval \([-1,3]\) by imposing symmetry and periodicity with \({\tilde{\varPsi }}_{a,n}(s) := \varPsi _{a,n}(s^2)\). Middle: corresponding diffusivities \({\tilde{\varPsi }}'_{1,1}\), \({\tilde{\varPsi }}'_{1,2}\), and \({\tilde{\varPsi }}'_{1,3}\) with \({\tilde{\varPsi }}'_{a,n}(s) := \varPsi '_{a,n}(s^2)\). Bottom: corresponding flux functions \(\varPhi _{1,1}\), \(\varPhi _{1,2}\), and \(\varPhi _{1,3}\) with \(\varPhi _{a,n}(s)=\varPsi '_{a,n}(s^2)s\)

For the penaliser function \(\varPsi \), we impose several restrictions which we discuss subsequently. Table 1 shows one reasonable class of functions \(\varPsi _{a,n}\) as well as the corresponding diffusivities \(\varPsi '_{a,n}\) and flux functions \(\varPhi _{a,n}\). In Fig. 2, we provide an illustration of three functions using \(a = 1\) and \(n = 1,2,3\). The penaliser is constructed following a three-step procedure. First, the function \(\varPsi (s^2)\) is defined as a continuously differentiable, decreasing, and strictly convex function for \(s \in [0,1]\) with \(\varPsi (0)=0\) and \(\varPhi _-(1)=0\) (left-sided derivative). Next, \(\varPsi \) is extended to \([-1,1]\) by symmetry and to \({\mathbb {R}}\) by periodicity \(\varPsi \bigl ((2+s)^2\bigr )=\varPsi (s^2)\). This results in a penaliser \(\varPsi (s^2)\) which is continuously differentiable everywhere except at even integers, where it is still continuous. Note that \(\varPsi (s^2)\) is increasing on \([-1,0]\) and [1, 2]. The flux \(\varPhi \) is continuous and increasing in (0, 2) with jump discontinuities at 0 and 2 (see Fig. 2). Furthermore, we have that \(\varPhi (s)=-\varPhi (-s)\) and \(\varPhi (2+s)=\varPhi (s)\). Exploiting the properties of \(\varPsi \) allows us to rewrite (4) without the redundant entries \(v_{N+1},\ldots ,v_{2N}\) as

$$\begin{aligned} \begin{aligned} E(\textit{\textbf{v}},\textit{\textbf{W}}) =&\frac{1}{2} \cdot \sum \limits _{i = 1}^N \sum \limits _{j = 1}^N w_{i,j} \cdot \bigg ( \varPsi ((v_j - v_i)^2) \, \\&+\varPsi ((v_j + v_i)^2) \bigg ). \end{aligned} \end{aligned}$$
(6)
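To make the preceding construction concrete, consider the choice (our normalisation; presumably it coincides with \(\varPsi _{1,1}\) from Table 1 up to scaling, compare the linear flux \(\varPhi _{1,1}\) in Fig. 2)

$$\begin{aligned} \varPsi (s^2) = (s-1)^2 - 1, \qquad \varPsi '(s^2) = 1 - \frac{1}{s}, \qquad \varPhi (s) = \varPsi '(s^2)\,s = s - 1, \qquad s \in (0,1]. \end{aligned}$$

It is decreasing and strictly convex in s with \(\varPsi (0)=0\) and \(\varPhi _-(1)=0\); the periodic extension keeps the same algebraic form on all of (0, 2), and the diffusivity \(\varPsi '(s^2)\) tends to \(-\infty \) for \(s \rightarrow 0^+\) (cf. Remark 2).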

A gradient descent for (4) is given by

$$\begin{aligned} \begin{aligned} \partial _t v_i&= - \partial _{v_i} E(\textit{\textbf{v}},\textit{\textbf{W}})\\&= \sum \limits _{j \in J_1^i} w_{i,j} \cdot \varPhi (v_j - v_i), \quad i = 1,\ldots ,2N, \end{aligned} \end{aligned}$$
(7)

where the \(v_i\) are now functions of the time t and

$$\begin{aligned} J_1^i := \{ j \in \{1,2,\ldots ,2N\} \, \vert \, v_j \ne v_i \}. \end{aligned}$$
(8)

Note that for \(1 \le i,j \le N\) we have \(|v_j-v_i|<1\), so the flux \(\varPhi (v_j-v_i)\) is negative for \(v_j > v_i\) and positive for \(v_j < v_i\); it thus always drives \(v_i\) away from \(v_j\). This implies that we have negative diffusivities \(\varPsi '\) for all \(|v_j-v_i| < 1\). Due to the convexity of \(\varPsi (s^2)\), the absolute values of the repulsive forces \(\varPhi \) decrease with the distance between \(v_i\) and \(v_j\). We remark that the jumps of \(\varPhi \) at 0 and 2 are not problematic here, since by the definition of \(J_1^i\) the arguments of \(\varPhi \) never attain these values.

Let us briefly discuss how the interval constraint for the \(v_i\), \(i=1,\ldots ,N\), is enforced in (4) and (7). First, notice that \(v_{2N+1-i}\) for \(i=1,\ldots ,N\) is the reflection of \(v_i\) at the right interval boundary 1. For \(v_i\) and \(v_{2N+1-j}\) with \(1\le i,j\le N\) and \(v_{2N+1-j}-v_i<1\), there is a repulsive force due to \(\varPhi (v_{2N+1-j}-v_i)<0\) that drives \(v_i\) and \(v_{2N+1-j}\) away from the right interval boundary. The closer \(v_i\) and \(v_{2N+1-j}\) come to this boundary, the stronger the repulsion becomes. For \(v_{2N+1-j}-v_i>1\), we have \(\varPhi (v_{2N+1-j}-v_i)>0\). By \(\varPhi (v_{2N+1-j}-v_i) = \varPhi \bigl ((2-v_j)-v_i\bigr ) = \varPhi \bigl ((-v_j)-v_i\bigr )\), this can equally be interpreted as a repulsion between \(v_i\) and \(-v_j\), where \(-v_j\) is the reflection of \(v_j\) at the left interval boundary 0. In this case, the interaction between \(v_i\) and \(v_{2N+1-j}\) drives \(v_i\) and \(-v_j\) away from the left interval boundary. Recapitulating both possible cases, it becomes clear that every \(v_i\) is repelled from the reflection of \(v_j\) either at the left or at the right interval boundary, but never from both at the same time. As \(\partial _tv_{2N+1-i}=-\partial _tv_i\) holds in (7), the symmetry of \(\textit{\textbf{v}}\) is preserved. Dropping the redundant entries \(v_{N+1},\ldots ,v_{2N}\), Equation (7) can be rewritten as

$$\begin{aligned} \partial _t v_i = \sum \limits _{j \in J_2^i} w_{i,j} \cdot \varPhi (v_j - v_i) - \sum \limits _{j = 1}^N w_{i,j} \cdot \varPhi (v_j + v_i), \end{aligned}$$
(9)

for \(i = 1,\ldots ,N\), where the second sum expresses the repulsions between original and reflected coordinates in a symmetric way. The set \(J_2^i\) is defined as

$$\begin{aligned} J_2^i := \{ j \in \{1,2,\ldots ,N\} \, \vert \, v_j \ne v_i \}. \end{aligned}$$
(10)

Equation (9) provides a formulation for pure repulsion among N different positions \(v_i\), with stabilisation achieved through the consideration of their reflections at the domain boundaries. It is worth mentioning that within (6) and (9), we only make use of the upper left \(N \times N\) block of \(\textit{\textbf{W}}\). In the following, we denote this submatrix by \(\varvec{{\tilde{W}}}\) and refer to its elements as \({\tilde{w}}_{i,j}\). Given an initial vector \({\textit{\textbf{f}}}\in (0,1)^N\) and initialising \(v_i(0)=f_i\), \(v_{2N+1-i}(0)=2-f_i\) for \(i=1,\ldots ,N\), the gradient descent (7) and (9) evolves \({\textit{\textbf{v}}}\) towards a minimiser of E.
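As an illustration, the following sketch (our code; function names are hypothetical) evaluates the energy (6) and the right-hand side of (9) for the concrete penaliser instance given above, i.e. \(\varPhi (s) = s - \mathrm {sgn}(s)\) on \((-2,2)\); setting \(\varPhi (0)=0\) automatically realises the exclusion of indices with \(v_j = v_i\):

```python
import numpy as np

def phi(s):
    """Odd, 2-periodic flux with phi(s) = s - 1 on (0, 2); phi(0) = 0."""
    return s - np.sign(s)

def psi(s):
    """Penaliser Psi(s^2) = (|s| - 1)^2 - 1, valid for |s| < 2."""
    return (np.abs(s) - 1.0) ** 2 - 1.0

def energy(v, W):
    """Discrete energy (6) for positions v in (0,1)^N and N x N weights W."""
    D = v[None, :] - v[:, None]          # differences v_j - v_i
    S = v[None, :] + v[:, None]          # sums        v_j + v_i
    return 0.5 * np.sum(W * (psi(D) + psi(S)))

def descent_rhs(v, W):
    """Right-hand side of the evolution equation (9)."""
    D = v[None, :] - v[:, None]
    S = v[None, :] + v[:, None]
    return np.sum(W * (phi(D) - phi(S)), axis=1)
```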

3 Theory

Below we provide a detailed analysis of the evolution and discuss its main properties. For this purpose, we consider the Hessian of (6) whose entries for \(1 \le i \le N\) read

$$\begin{aligned} \partial _{v_iv_i} E(\textit{\textbf{v}},\varvec{{\tilde{W}}})= & {} \sum \limits _{{j \in J_2^i}} {\tilde{w}}_{i,j} \cdot \varPhi '(v_j-v_i)\nonumber \\&+ \sum \limits _{j = 1}^N {\tilde{w}}_{i,j} \cdot \varPhi '(v_j+v_i), \end{aligned}$$
(11)
$$\begin{aligned} \partial _{v_iv_j} E(\textit{\textbf{v}},\varvec{{\tilde{W}}})= & {} {\left\{ \begin{array}{ll} \begin{aligned} {\tilde{w}}_{i,j} \cdot \big ( &{} \varPhi '(v_j+v_i) \, -\\ &{} \varPhi '(v_j-v_i) \big ), \end{aligned} &{} \forall j \in J_2^i,\\ \, {\tilde{w}}_{i,j} \cdot \varPhi '(v_j+v_i), &{} \forall j \in J_3^i, \end{array}\right. } \end{aligned}$$
(12)

where

$$\begin{aligned} J_3^i := \{ j \in \{1,2,\ldots ,N\} \, \vert \, v_i = v_j \}. \end{aligned}$$
(13)

3.1 General Results

In a first step, let us investigate the well-posedness of the underlying initial value problem in the sense of Hadamard [21].

Theorem 1

(Well-Posedness) Let \(\varPsi = \varPsi _{a,n}\) be as defined in Table 1. Then, the initial value problem (9) is well-posed since

(a) it has a solution,

(b) the solution is unique, and

(c) it depends continuously on the initial conditions.

Proof

The initial value problem (9) can be written as

$$\begin{aligned} \varvec{{\dot{v}}}(t)&= \textit{\textbf{f}}(\textit{\textbf{v}}(t)) := -\varvec{\nabla }_{\textit{\textbf{v}}} E(\textit{\textbf{v}}(t),\textit{\textbf{W}}) \end{aligned}$$
(14)
$$\begin{aligned} \textit{\textbf{v}}(0)&= \textit{\textbf{v}}_0 \end{aligned}$$
(15)

with \(\textit{\textbf{v}}(t) \in {\mathbb {R}}^{2N}\) and \(t \in {\mathbb {R}}_0^+\) where we make use of the fact that \(\textit{\textbf{W}}\) is a constant weight matrix.

In case \(\textit{\textbf{f}}(\textit{\textbf{v}}(t))\) is continuously differentiable and Lipschitz continuous, all three conditions (a)–(c) hold. Existence and uniqueness directly follow from [39, chapter 3.1, Theorem 3]. Continuous dependence on the initial conditions is guaranteed due to [39, chapter 2.3, Theorem 1] which is based on Gronwall’s Lemma [20]. Thus, let us now prove differentiability and Lipschitz continuity of \(\textit{\textbf{f}}(\textit{\textbf{v}}(t))\).

Differentiability Differentiability follows from the fact that all functions \(\varPhi _{a,n}\) are continuously differentiable. As a consequence, the partial derivatives of the right-hand side of (14) w.r.t. \(v_i\) exist for \(i = 1,\ldots ,2N\).

Lipschitz Continuity The Gershgorin circle theorem [17] allows us to estimate a valid Lipschitz constant L as an upper bound of the spectral radius of the Jacobian of (9). For \(1 \le i \le N\), the entries read

$$\begin{aligned} \partial _{v_i}(\partial _t v_i) =&-\sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot \big ( \varPhi '(v_j-v_i) + \varPhi '(v_j+v_i) \big )\nonumber \\&- 2 \cdot \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot \varPhi '(v_j + v_i), \end{aligned}$$
(16)
$$\begin{aligned} \partial _{v_j}(\partial _t v_i) =&{\left\{ \begin{array}{ll} \begin{aligned} {\tilde{w}}_{i,j} \cdot \big ( &{} \varPhi '(v_j-v_i) -\\ &{} \varPhi '(v_j+v_i) \big ), \end{aligned} &{} \forall j \in J_2^i,\\ -{\tilde{w}}_{i,j} \cdot \varPhi '(v_j + v_i), &{} \forall j \in J_3^i. \end{array}\right. } \end{aligned}$$
(17)

The radii of the Gershgorin discs fulfil

$$\begin{aligned} r_i= & {} \sum \limits _{\begin{array}{c} j = 1\\ j \ne i \end{array}}^N \big | \partial _{v_j} (\partial _t v_i) \big |\nonumber \\= & {} \sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot | \varPhi '(v_j - v_i) - \varPhi '(v_j + v_i) |\nonumber \\&+ \sum \limits _{\begin{array}{c} j \in J_3^i\\ j \ne i \end{array}} {\tilde{w}}_{i,j} \cdot |\varPhi '(v_j+v_i)| \nonumber \\< & {} \sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot | \varPhi '(v_j - v_i) - \varPhi '(v_j + v_i) |\nonumber \\&+ \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot |\varPhi '(v_j+v_i)| \nonumber \\=: & {} {\tilde{r}}_i, \qquad i = 1,\ldots ,N. \end{aligned}$$
(18)

Then, we have \(|\lambda _i - \partial _{v_i}(\partial _t v_i)| < {\tilde{r}}_i\) for \(1 \le i \le N\) where \(\lambda _i\) denotes the ith eigenvalue of the Jacobian of (9). This leads to the bounds

$$\begin{aligned} \lambda _i<&\sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot \big ( |\varPhi '(v_j-v_i)-\varPhi '(v_j+v_i)| \, \nonumber \\&-( \varPhi '(v_j-v_i) + \varPhi '(v_j+v_i)) \big ) \nonumber \\&+ \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot \big ( |\varPhi '(v_j+v_i)| - 2 \cdot \varPhi '(v_j+v_i) \big ) \nonumber \\ \le&\sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot \big ( |\varPhi '(v_j-v_i)|+|\varPhi '(v_j+v_i)| \, \nonumber \\&+ |\varPhi '(v_j-v_i)| + |\varPhi '(v_j+v_i)| \big ) \nonumber \\&+ \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot \big ( |\varPhi '(v_j+v_i)| + 2 \cdot |\varPhi '(v_j+v_i)| \big ) \nonumber \\ \le&\, 4 \cdot L_\varPhi \cdot \sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} + 3 \cdot L_\varPhi \cdot \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j}\nonumber \\ <&\, 4 \cdot L_\varPhi \cdot \sum \limits _{j = 1}^N {\tilde{w}}_{i,j}, \qquad i = 1,\ldots ,N, \end{aligned}$$
(19)

where \(L_{\varPhi }\) represents the Lipschitz constant of the flux function \(\varPhi \). Using the same reasoning, one can show that

$$\begin{aligned} \lambda _i > -4 \cdot L_\varPhi \cdot \sum \limits _{j= 1}^N {\tilde{w}}_{i,j}, \qquad i = 1,\ldots ,N. \end{aligned}$$
(20)

Consequently, an upper bound for the spectral radius—and thus for the Lipschitz constant L of the gradient of (9)—reads

$$\begin{aligned} L \le \max \limits _{1 \le i \le N}|\lambda _i| < 4 \cdot L_\varPhi \cdot \max \limits _{1 \le i \le N} \sum \limits _{j = 1}^N {\tilde{w}}_{i,j} =: L_{\mathrm {max}}. \end{aligned}$$
(21)

For our specific class of flux functions \(\varPhi _{a,n}\), a valid Lipschitz constant \(L_{\varPhi }\) is given by

$$\begin{aligned} L_{\varPhi } = a \cdot n \cdot (2n - 1) \cdot 2^{2n-2} \end{aligned}$$
(22)

such that we have

$$\begin{aligned} L < 4 \cdot a \cdot n \cdot (2n-1) \cdot 2^{2n-2} \cdot \max \limits _{1 \le i \le N} \sum \limits _{j = 1}^N {\tilde{w}}_{i,j}. \end{aligned}$$
(23)

This concludes the proof. \(\square \)
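As a concrete instance of this bound: for \(\varPsi _{1,1}\) (i.e. \(a = n = 1\) and hence \(L_\varPhi = 1\)) and the unit weight matrix \(\varvec{{\tilde{W}}} = \textit{\textbf{1}}\textit{\textbf{1}}^{\mathrm {T}}\) considered in Sect. 3.2, every row sum equals N, and (23) reduces to \(L < 4N\).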

Next, let us show that no position \(v_i\) can ever reach or cross the interval boundaries 0 and 1.

Theorem 2

(Avoidance of Range Interval Boundaries) For any weighting matrix \(\varvec{{\tilde{W}}} \in ({\mathbb {R}}_0^+)^{N \times N}\) with positive diagonal entries, all N positions \(v_i\) which evolve according to (9) and start from arbitrary initial values in (0, 1) do not reach the domain boundaries 0 and 1 at any time \(t \ge 0\).

Proof

Equation (9) can be written as

$$\begin{aligned} \begin{aligned} \partial _tv_i =&\sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot \bigg ( \varPhi (v_j-v_i) - \varPhi (v_j+v_i) \bigg ) \\&- \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot \varPhi (2v_i), \end{aligned} \end{aligned}$$
(24)

where \(1 \le i \le N\). Notice that for \(j \in J_2^i\), we have

$$\begin{aligned} \lim \limits _{v_i \rightarrow 0^+} \varPhi (v_j-v_i) - \varPhi (v_j+v_i)= & {} 0, \end{aligned}$$
(25)
$$\begin{aligned} \lim \limits _{v_i \rightarrow 1^-} \varPhi (v_j-v_i) - \varPhi (v_j+v_i)= & {} 0, \end{aligned}$$
(26)

where the latter follows from the periodicity of \(\varPhi \). Consequently, any position \(v_i\) which gets arbitrarily close to one of the domain boundaries 0 or 1 experiences a vanishing influence from the positions \(v_j\) with \(j \in J_2^i\), i.e. the first sum in (24) tends to zero. The definition of \(\varPsi (s^2)\) implies that

$$\begin{aligned} \varPsi '(s^2) < 0,&\quad \forall s \in (0,1), \end{aligned}$$
(27)
$$\begin{aligned} \varPsi '(s^2) > 0,&\quad \forall s \in (1,2), \end{aligned}$$
(28)

from which it follows for \(1 \le i \le N\) that

$$\begin{aligned} - \varPhi (2v_i) > 0,&\quad \forall v_i \in \left( 0,\frac{1}{2}\right) , \end{aligned}$$
(29)
$$\begin{aligned} - \varPhi (2v_i) < 0,&\quad \forall v_i \in \left( \frac{1}{2},1\right) . \end{aligned}$$
(30)

Now remember that \(\varvec{{\tilde{W}}} \in ({\mathbb {R}}_0^+)^{N \times N}\) and \({\tilde{w}}_{i,i} > 0\). In combination with (29) and (30), we get

$$\begin{aligned} \lim \limits _{v_i \rightarrow 0^+} \partial _t v_i > 0 \quad \text{ and } \quad \lim \limits _{v_i \rightarrow 1^-} \partial _t v_i < 0, \end{aligned}$$
(31)

which concludes the proof.\(\square \)

Let us now assume that the penaliser function is given by \(\varPsi = \varPsi _{a,n}\) from Table 1. Below, we prove that for \(n=1\) this implies convergence to the global minimum of the energy \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\); Remark 1 addresses larger n.

Theorem 3

(Convergence for \(\varPsi = \varPsi _{a,n=1}\)) For \(t \rightarrow \infty \), given a penaliser \(\varPsi _{a,1}\) with arbitrary \(a > 0\), any initial configuration \(\textit{\textbf{v}} \in (0,1)^N\) converges to a unique steady state \(\textit{\textbf{v}}^*\) which is the global minimiser of the energy given in (6).

Proof

As a sum of convex functions, (6) is convex. Therefore, the function \(V(\textit{\textbf{v}},\varvec{{\tilde{W}}}) := E(\textit{\textbf{v}},\varvec{{\tilde{W}}}) - E(\textit{\textbf{v}}^*,\varvec{{\tilde{W}}})\) (where \(\textit{\textbf{v}}^*\) is the equilibrium point) is a Lyapunov function with \(V(\textit{\textbf{v}}^*,\varvec{{\tilde{W}}}) = 0\) and \(V(\textit{\textbf{v}},\varvec{{\tilde{W}}}) > 0\) for all \(\textit{\textbf{v}} \ne \textit{\textbf{v}}^*\). Furthermore, we have

$$\begin{aligned} \partial _t V(\textit{\textbf{v}},\varvec{{\tilde{W}}}) = -\sum \limits _{i=1}^{N} \big ( \partial _{v_i} E(\textit{\textbf{v}},\varvec{{\tilde{W}}}) \big )^2 \le 0 . \end{aligned}$$
(32)

According to Gershgorin’s theorem [17], one can show that the Hessian matrix of (6) is positive definite for \(\varPsi = \varPsi _{a,1}\) from which it follows that \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\) has a strict (global) minimum. This implies that the inequality in (32) becomes strict except in case of \(\textit{\textbf{v}} = \textit{\textbf{v}}^*\) and guarantees asymptotic Lyapunov stability [30] of \(\textit{\textbf{v}}^*\). Thus, we have convergence to \(\textit{\textbf{v}}^*\) for \(t \rightarrow \infty \).\(\square \)

Remark 1

Theorem 3 can be extended to the case of \(n=2\) and—in a weaker formulation—to arbitrary \(n \in {\mathbb {N}}\). The proofs for both cases are based on a straightforward application of the Gershgorin circle theorem. For details, we refer to the supplementary material.

(a) Given that \(\varPsi = \varPsi _{a,n=2}\), let us assume that one of the following two conditions

• \(v_i \ne \frac{1}{2}\), or

• there exists \(j \in J_2^i\) for which \(v_j \ne 1 - v_i\) and \({\tilde{w}}_{i,j} > 0\),

is fulfilled for every \(i \in \{1,\ldots ,N\}\) and \(t \ge 0\). Then, the Hessian matrix of (6) is positive definite and convergence to the strict global minimum of \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\) follows.

(b) For all penaliser functions \(\varPsi = \varPsi _{a,n}\), one can show that the Hessian matrix of (6) is positive semi-definite. This means that our method converges to a global minimum of \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\). However, this minimum does not have to be unique.

In general, the steady-state solution of (9) depends on the definition of the penaliser function \(\varPsi \). Based on (24), and assuming that \(\varPsi = \varPsi _{a,n}\), a minimiser of \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\) necessarily fulfils the equation

$$\begin{aligned} 0 =&\sum \limits _{j \in J_2^i} {\tilde{w}}_{i,j} \cdot \big ( \mathrm {sgn}(v_j^*-v_i^*) \cdot (|v_j^*-v_i^*|-1)^{2n-1}\nonumber \\&- (v_j^*+v_i^*-1)^{2n-1} \big ) - \sum \limits _{j \in J_3^i} {\tilde{w}}_{i,j} \cdot (2v_i^*-1)^{2n-1}, \end{aligned}$$
(33)

where \(i = 1,\ldots ,N\). The sign factor accounts for the odd and 2-periodic extension of \(\varPhi \) to arguments in \((-1,0)\).

3.2 Global Model

If all positions \(v_i\) interact with each other during the evolution, i.e. \({\tilde{w}}_{i,j} > 0\) for \(1 \le i,j \le N\), we speak of our model as acting globally. Below, we prove the existence of weight matrices \(\varvec{{\tilde{W}}}\) for which distinct positions \(v_i\) and \(v_j\) (with \(i \ne j\)) can never become equal (assuming that the positions \(v_i\), \(i = 1, \ldots , N\), are distinct for \(t = 0\)). This implies that the initial rank order of \(v_i\) is preserved throughout the evolution.

Theorem 4

(Distinctness of \(v_i\) and \(v_j\)) Among N initially distinct positions \(v_i \in (0,1)\) evolving according to (9), no two ever become equal if \({\tilde{w}}_{j,k} = {\tilde{w}}_{i,k} > 0\) for \(1 \le i,j,k \le N, \, i \ne j\).

Proof

Given N distinct positions \(v_i \in (0,1)\), equation (9) can be written as

$$\begin{aligned} \partial _tv_i = \sum \limits _{\begin{array}{c} \scriptstyle k = 1\\ \scriptstyle k \ne i \end{array}}^N {\tilde{w}}_{i,k} \cdot \varPhi (v_k-v_i) - \sum \limits _{k = 1}^N {\tilde{w}}_{i,k} \cdot \varPhi (v_k+v_i), \end{aligned}$$
(34)

for \(i = 1,\ldots ,N\). We use this equation to derive the difference

$$\begin{aligned}&\partial _t \left( v_j - v_i \right) = \, ({\tilde{w}}_{j,i} + {\tilde{w}}_{i,j}) \cdot \varPhi (v_i - v_j)\nonumber \\&\qquad + \sum \limits _{\begin{array}{c} \scriptstyle k = 1\\ \scriptstyle k \ne i,j \end{array}}^N \bigg ( {\tilde{w}}_{j,k} \cdot \varPhi (v_k - v_j) \, - {\tilde{w}}_{i,k} \cdot \varPhi (v_k - v_i) \bigg )\nonumber \\&\qquad - \sum \limits _{k = 1}^N \bigg ( {\tilde{w}}_{j,k} \cdot \varPhi (v_k + v_j) \, - {\tilde{w}}_{i,k} \cdot \varPhi (v_k + v_i) \bigg ), \end{aligned}$$
(35)

where \(1 \le i,j \le N\). Assume w.l.o.g. that \(v_j > v_i\) and consider (35) in the limit \(v_j-v_i\rightarrow 0\). Then, we have

$$\begin{aligned} \lim \limits _{v_j-v_i\rightarrow 0} ({\tilde{w}}_{j,i} + {\tilde{w}}_{i,j}) \cdot \varPhi (v_i - v_j) > 0, \end{aligned}$$
(36)

if \({\tilde{w}}_{j,i} + {\tilde{w}}_{i,j} > 0\), which every global model fulfils by the assumption that \({\tilde{w}}_{i,j} > 0\) for \(1 \le i,j \le N\). This follows from the fact that \(\varPhi (s) > 0\) for \(s \in (-1,0)\). Furthermore, we have

$$\begin{aligned}&\lim \limits _{v_j-v_i\rightarrow 0} \sum \limits _{\begin{array}{c} \scriptstyle k = 1\\ \scriptstyle k \ne i,j \end{array}}^N \bigg ( {\tilde{w}}_{j,k} \cdot \varPhi (v_k - v_j) - {\tilde{w}}_{i,k} \cdot \varPhi (v_k - v_i) \bigg )\nonumber \\&\qquad - \sum \limits _{k = 1}^N \bigg ( {\tilde{w}}_{j,k} \cdot \varPhi (v_k + v_j) - {\tilde{w}}_{i,k} \cdot \varPhi (v_k + v_i) \bigg )\nonumber \\&\quad = \sum \limits _{\begin{array}{c} \scriptstyle k = 1\\ \scriptstyle k \ne i,j \end{array}}^N ({\tilde{w}}_{j,k} - {\tilde{w}}_{i,k}) \cdot \varPhi (v_k - v_i) \nonumber \\&\qquad - \sum \limits _{k = 1}^N ({\tilde{w}}_{j,k} - {\tilde{w}}_{i,k}) \cdot \varPhi (v_k + v_i)\nonumber \\&\quad = \,\, 0 \quad \text {if }{\tilde{w}}_{j,k} = {\tilde{w}}_{i,k}\text { for }1 \le k \le N. \end{aligned}$$
(37)

In conclusion, we can guarantee for global models with distinct particle positions that

$$\begin{aligned} \lim \limits _{v_j-v_i\rightarrow 0} \partial _t \left( v_j - v_i \right) > 0, \end{aligned}$$
(38)

if \({\tilde{w}}_{j,k} = {\tilde{w}}_{i,k}\) where \(1 \le i,j,k \le N\) and \(i \ne j\). According to (38), \(v_j\) will always start moving away from \(v_i\) (and vice versa) when the difference between both gets sufficiently small. Since the initial positions are distinct, it follows that \(v_i \ne v_j\) for \(i \ne j\) for all times t.\(\square \)

A special case occurs if all entries of the weight matrix \(\varvec{{\tilde{W}}}\) are set to 1—i.e. \(\varvec{{\tilde{W}}} = \textit{\textbf{1}}\textit{\textbf{1}}^{\mathrm {T}}\) with \(\textit{\textbf{1}} := ( 1, \ldots , 1)^{\mathrm {T}}\in {\mathbb {R}}^N\). For this scenario, we obtain an analytic steady-state solution which is independent of the penaliser \(\varPsi \):

Theorem 5

(Analytic Steady-State Solution for \(\varvec{{\tilde{W}}} = {\mathbf {1}}{\mathbf {1}}^{\mathrm {T}}\)) Under the assumption that \((v_i)\) is in increasing order, \(\varvec{{\tilde{W}}}={\mathbf {1}}{\mathbf {1}}^{\mathrm {T}}\), and that \(\varPsi (s^2)\) is twice continuously differentiable in (0, 2) the unique minimiser of (4) is given by \({\textit{\textbf{v}}}^*=(v_1^*,\ldots ,v_{2N}^*)^{\mathrm {T}}\), \(v_i^*=(i-\nicefrac 12)/N\), \(i=1,\ldots ,2N\).

Proof

With \(\varvec{{\tilde{W}}}={\mathbf {1}}{\mathbf {1}}^{\mathrm {T}}\), Equation (4) can be rewritten without the redundant entries of \(\textit{\textbf{v}}\) as

$$\begin{aligned} \begin{aligned} E(\textit{\textbf{v}}) =&\sum \limits _{i = 1}^{N - 1} \sum \limits _{j = i + 1}^N \varPsi ( (v_j - v_i)^2 ) + \frac{1}{2} \cdot \sum \limits _{i = 1}^N \varPsi ( 4 v_i^2 )\\&+ \sum \limits _{i = 1}^{N - 1} \sum \limits _{j = i + 1}^N \varPsi ( (v_j + v_i)^2 ). \end{aligned} \end{aligned}$$
(39)

From this, one can verify by straightforward, albeit lengthy calculations that \(\varvec{\nabla }E(\textit{\textbf{v}}^*)=0\). Moreover, one finds that the Hessian of E at \(\textit{\textbf{v}}^*\) is

$$\begin{aligned} \mathrm {D}^2E(\textit{\textbf{v}}^*) = \sum \limits _{k=1}^{N} \textit{\textbf{A}}_k \varPhi ' \left( \frac{k}{N}\right) . \end{aligned}$$
(40)

Here, \(\textit{\textbf{A}}_k\) are sparse symmetric \(N\times N\)-matrices given by

$$\begin{aligned} \textit{\textbf{A}}_k&= 2 \textit{\textbf{I}} - \textit{\textbf{T}}_k - \textit{\textbf{T}}_{-k} + \textit{\textbf{H}}_{k+1} + \textit{\textbf{H}}_{2N-k+1}, \end{aligned}$$
(41)
$$\begin{aligned} \textit{\textbf{A}}_N&= \textit{\textbf{I}} + \textit{\textbf{H}}_{N+1}, \end{aligned}$$
(42)

for \(k=1,\ldots ,N-1\), where the unit matrix \(\textit{\textbf{I}}\), the single-diagonal Toeplitz matrices \(\textit{\textbf{T}}_k\), and the single-antidiagonal Hankel matrices \(\textit{\textbf{H}}_k\) are defined as

$$\begin{aligned} \textit{\textbf{I}}&= \bigl (\delta _{i,j}\bigr )_{i,j=1}^N , \end{aligned}$$
(43)
$$\begin{aligned} \textit{\textbf{T}}_k&= \bigl (\delta _{j-i,k}\bigr )_{i,j=1}^N , \end{aligned}$$
(44)
$$\begin{aligned} \textit{\textbf{H}}_k&= \bigl (\delta _{i+j,k}\bigr )_{i,j=1}^N . \end{aligned}$$
(45)

Here, \(\delta _{i,j}\) denotes the Kronecker symbol, \(\delta _{i,j}=1\) if \(i=j\), and \(\delta _{i,j}=0\) otherwise. All \(\textit{\textbf{A}}_k\), \(k=1,\ldots ,N\) are weakly diagonally dominant with positive diagonal, thus positive semidefinite by Gershgorin’s Theorem. Moreover, the tridiagonal matrix \(\textit{\textbf{A}}_1\) is of full rank, thus even positive definite. By strict convexity of \(\varPsi (s^2)\), all \(\varPhi '(\nicefrac {k}{N})\) are positive; thus, \(\mathrm {D}^2E(\textit{\textbf{v}}^*)\) is positive definite.

As a consequence, the steady state of the gradient descent (9) for any initial data \(\textit{\textbf{f}}\) (with arbitrary rank-order) can—under the condition that \(\varvec{{\tilde{W}}} = \textit{\textbf{1}}\textit{\textbf{1}}^{\mathrm {T}}\)—be computed directly by sorting the \(f_i\): let \(\sigma \) be the permutation of \(\{1,\ldots ,N\}\) for which \((f_{\sigma ^{-1}(i)})_{i=1,\ldots ,N}\) is increasing (this is what a sorting algorithm computes); then the steady state is given by \(v_i^*=(\sigma (i)-\nicefrac 12)/N\) for \(i=1,\ldots ,N\) (cf. Fig. 3).\(\square \)
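In code, this sorting construction is a one-liner; the following NumPy sketch (ours) returns the steady state of Theorem 5 directly from the initial data:

```python
import numpy as np

def steady_state_unit_weights(f):
    """Steady state v*_i = (sigma(i) - 1/2)/N for W = 11^T (cf. Theorem 5)."""
    ranks = np.argsort(np.argsort(f))    # 0-based rank of f_i, i.e. sigma(i) - 1
    return (ranks + 0.5) / len(f)

# e.g. f = [0.2, 0.9, 0.4]  ->  v* = [1/6, 5/6, 1/2]
```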

Fig. 3 Application of the global model to a system of seven particles with weight matrix \(\varvec{{\tilde{W}}} = \textit{\textbf{1}}\textit{\textbf{1}}^{\mathrm {T}}\)

Fig. 4 Application of the global model to a system of seven particles with \({\tilde{w}}_{i,k} = \nicefrac {1}{k}\) for \(1 \le i,k \le N\)

Additionally, we present an analytic expression for the steady state of the global model given a penaliser function \(\varPsi = \varPsi _{a,n}\) (cf. Table 1) with \(n=1\).

Theorem 6

(Analytic Steady-State Solution for \(\varPsi = \varPsi _{a,n=1}\)) Given N distinct positions \(v_i\) in increasing order and a penaliser function \(\varPsi = \varPsi _{a,n=1}\), the unique minimiser of (4) is given by

$$\begin{aligned} v_i^* = \frac{\sum \limits _{j = 1}^i {\tilde{w}}_{i,j} - \frac{1}{2} {\tilde{w}}_{i,i}}{\sum \limits _{j = 1}^N {\tilde{w}}_{i,j}} , \qquad i = 1,\ldots ,N. \end{aligned}$$
(46)

Proof

The presented minimiser follows directly from (33). Figure 4 provides an illustration of the steady state. \(\square \)
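For completeness, a NumPy sketch (ours) of the closed form (46), assuming the positions are sorted increasingly and the rows of \(\varvec{{\tilde{W}}}\) are ordered accordingly:

```python
import numpy as np

def steady_state_linear_flux(W):
    """Minimiser (46): v*_i = (sum_{j<=i} w_ij - w_ii/2) / sum_j w_ij."""
    N = W.shape[0]
    partial = np.cumsum(W, axis=1)[np.arange(N), np.arange(N)]  # sum_{j<=i} w_ij
    return (partial - 0.5 * np.diag(W)) / W.sum(axis=1)
```

For \(\varvec{{\tilde{W}}} = \textit{\textbf{1}}\textit{\textbf{1}}^{\mathrm {T}}\), this reproduces \(v_i^* = (i - \nicefrac 12)/N\) from Theorem 5.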

Finally, for the case that all entries of the weight matrix \(\varvec{{\tilde{W}}}\) are set to 1, we show that the global model converges—independently of \(\varPsi \)—to a unique steady state:

Theorem 7

(Convergence for \(\varvec{{\tilde{W}}} = {\mathbf {1}}{\mathbf {1}}^{\mathrm {T}}\)) Given that \(\varvec{{\tilde{W}}} = {\mathbf {1}}{\mathbf {1}}^{\mathrm {T}}\), any initial configuration \(\textit{\textbf{v}} \in (0,1)^N\) with distinct entries converges to a unique steady state \(\textit{\textbf{v}}^*\) for \(t \rightarrow \infty \). This is the global minimiser of the energy given in (6).

Proof

Using the same reasoning as in the proof of Theorem 3, we know that inequality (32) holds. Due to the positive definiteness of (40), it follows that \(E(\textit{\textbf{v}},\varvec{{\tilde{W}}})\) has a strict (global) minimum, which implies that the inequality in (32) becomes strict except in the case \(\textit{\textbf{v}} = \textit{\textbf{v}}^*\). This guarantees asymptotic Lyapunov stability of \(\textit{\textbf{v}}^*\) and thus convergence to \(\textit{\textbf{v}}^*\) for \(t \rightarrow \infty \). \(\square \)

3.3 Relation to Variational Signal and Image Filtering

Let us now interpret \(v_1,\ldots ,v_N\) as samples of a smooth 1D signal \(u:\varOmega \rightarrow [0,1]\) over an interval \(\varOmega \) of the real axis, taken at sampling positions \(x_i=x_0+i\,h\) with grid mesh size \(h>0\). We consider the model (4) with \(w_{i,j}:=\gamma (|x_j-x_i|)\), where \(\gamma : {\mathbb {R}}_0^+ \rightarrow [0,1]\) is a non-increasing weighting function with compact support \([0,\varrho )\).

Theorem 8

(Space-Continuous Energy) Equation (4) can be considered as a discretisation of

$$\begin{aligned} E[u] = \frac{1}{2} \int \limits _\varOmega \bigl (W (u_x^2) + B (u)\bigr )\,\mathrm {d}x \end{aligned}$$
(47)

with penaliser \(W(u_x^2)\approx C\,\varPsi (u_x^2)\) and barrier function \(B(u)\approx D\,\varPsi (4u^2)\), where C and D are positive constants.

Remark 2

(a) The penaliser W is decreasing and convex in \(u_x\). The barrier function B is convex, and it enforces the interval constraint on u by favouring values u away from the interval boundaries. The discrete penaliser \(\varPsi \) generates both the penaliser W for derivatives and the barrier function B.

(b) Note that by construction of W, the diffusivity \(g(u_x^2) := W'(u_x^2)\sim \varPsi '(u_x^2)\) has a singularity at 0 with \(-\infty \) as limit.

(c) The cut-off of \(\gamma \) at radius \(\varrho \) implies the locality of the functional (47) that can thereby be linked to a diffusion equation of type (1). Without a cut-off, a nonlocal diffusion equation would arise instead.

Proof of Theorem 8

We notice first that \(v_j-v_i\) and \(v_i+v_j\) for \(1\le i,j\le N\) are first-order approximations of \((j-i)\,h\,u_x(x_i)\) and \(2 u(x_i)\), respectively.

Derivation of the Penaliser W Assume first for simplicity that \(\varPsi (s^2)=-\kappa s\) with \(\kappa >0\), i.e. linear in s on [0, 1] (and thus not strictly convex). Then, we have for a part of the inner sums of (4) corresponding to a fixed i:

$$\begin{aligned} \begin{aligned}&\frac{1}{2} \biggl ( \sum \limits _{j=1}^N \gamma (|x_j-x_i|) \cdot \varPsi \bigl ((v_j-v_i)^2\bigr ) \\&\qquad + \sum \limits _{{j=N+1}}^{2N} \gamma (|x_j-x_{2N+1-i}|) \cdot \varPsi \bigl ((v_j-v_{2N+1-i})^2\bigr ) \biggr ) \\&\quad = \sum \limits _{j=1}^N \gamma (|x_j - x_i|) \cdot \varPsi \bigl ((|v_j-v_i|)^2\bigr )\\&\quad \approx -\kappa \, h \, |u_x(x_i)| \sum \limits _{j=1}^N \gamma (|j-i| \, h) \cdot |j-i|\\&\quad = h \, \varPsi \bigl (u_x(x_i)^2\bigr ) \sum \limits _{k=1-i}^{N-i} |k| \, \gamma (|k|\,h)\\&\quad \approx h \varPsi (u_x (x_i)^2) \cdot \frac{2}{h^2} \int _0^\varrho z \gamma (z) \mathrm {d}z\\&\quad =: h C \varPsi (u_x (x_i)^2), \end{aligned} \end{aligned}$$
(48)

where in the last step the sum over \(k=1-i,\ldots ,N-i\) has been replaced with a sum over \(k = -\lfloor \varrho /h \rfloor , \dots , \lfloor \varrho /h \rfloor \), thus introducing a cut-off error for those locations \(x_i\) that are within the distance \(\varrho \) from the interval ends. Summation over \(i=1,\ldots ,N\) approximates \(\int _\varOmega C \varPsi (u_x^2) \mathrm {d}x\) from which we can read off \(W(u_x^2)\approx C \varPsi (u_x^2)\).

For \(\varPsi (s^2)\) that are nonlinear in s, \(\varPsi (u_x(x_i)^2)\) in (48) is changed into a weighted sum of \(\varPsi \bigl ((k u_x(x_i))^2\bigr )\) for \(k=1,\ldots ,N-1\), which still amounts to a decreasing function \(W(u_x^2)\) that is convex in \(u_x\). Qualitatively, \(W'\) then behaves the same way as before.

Derivation of the Barrier Function B Collecting the summands of (4) that were not used in (48), we have, again for fixed i,

$$\begin{aligned} \begin{aligned}&\frac{1}{2} \biggl ( \sum \limits _{{j = N+1}}^{2N} \gamma \bigl (|x_j-x_i|\bigr ) \cdot \varPsi \bigl ((v_j-v_i)^2\bigr ) \\&\qquad +\sum \limits _{j = 1}^{N} \gamma \bigl (|x_j-x_{2N+1-i}|\bigr ) \cdot \varPsi \bigl ((v_j-v_{2N+1-i})^2\bigr ) \biggr ) \\&\quad = \sum \limits _{j = 1}^{N} \gamma \bigl (|x_j-x_i|\bigr ) \cdot \varPsi \bigl ((v_i+v_j)^2\bigr )\\&\quad \approx \left( \frac{2}{h} \int _0^\varrho \gamma (z) \mathrm {d} z + 1 \right) \cdot \varPsi (4 u(x_i)^2)\\&\quad =: h D \cdot \varPsi (4 u(x_i)^2), \end{aligned} \end{aligned}$$
(49)

and thus after summation over i analogous to the previous step \(\int _\varOmega B(u)\,\mathrm {d}x\) with \(B(u) \approx D \varPsi (4 u^2)\).\(\square \)

Similar derivations can be made for patches of 2D images. A point worth noticing is that the barrier function B is bounded. This differs from usual continuous models where such barrier functions tend to infinity at the interval boundaries. However, for each given sampling grid and patch size the barrier function is just strong enough to prevent W from pushing the values out of the interval.

4 Explicit Time Discretisation

Up to this point, we have established a theory for the time-continuous evolution of particle positions. In order to be able to employ our model in simulations and applications, we need to discretise (9) in time. Subsequently, we provide a simple yet powerful discretisation which preserves all important properties of the time-continuous model. An approximation of the time derivative in (9) by forward differences yields the explicit scheme

$$\begin{aligned} \begin{aligned} v_i^{k+1} =&\,\, v_i^k + \tau \cdot \sum \limits _{\ell \in J_2^i} {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k-v_i^k)\\&- \tau \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k+v_i^k), \end{aligned} \end{aligned}$$
(50)

for \(i = 1,\ldots ,N\), where \(\tau \) denotes the time step size and an upper index k refers to the time \(k\tau \). In the following, we derive conditions on \(\tau \) under which the explicit scheme preserves the position range (0, 1) and the position ordering. Furthermore, we show convergence of (50) for suitably chosen \(\tau \).

Theorem 9

(Avoidance of Range Interval Boundaries of the Explicit Scheme) Let \(L_\varPhi \) be the Lipschitz constant of \(\varPhi \) restricted to the interval (0, 2). Moreover, let \(0< v_i^k < 1\), for every \(1 \le i \le N\), and assume that the time step size \(\tau \) of the explicit scheme (50) satisfies

$$\begin{aligned} 0< \tau < \frac{1}{2 \cdot L_\varPhi \cdot \max \limits _{1 \le i \le N} \sum \limits _{\ell = 1}^N {\tilde{w}}_{i,\ell }} . \end{aligned}$$
(51)

Then, it follows that \(0< v_i^{k+1} < 1\) for every \(1 \le i \le N\).

Proof

In accordance with (24), the explicit scheme (50) can be written as

$$\begin{aligned} v_i^{k+1} =&\,\, v_i^k + \tau \cdot \sum \limits _{\ell \in J_2^i} {\tilde{w}}_{i,\ell } \cdot \bigg ( \varPhi (v_\ell ^k-v_i^k) - \varPhi (v_\ell ^k+v_i^k) \bigg )\nonumber \\&- \tau \cdot \sum \limits _{\ell \in J_3^i} {\tilde{w}}_{i,\ell } \cdot \varPhi (2v_i^k), \end{aligned}$$
(52)

where \(i = 1,\ldots ,N\). Now assume that \(0< v_i^k, v_j^k < 1\) and let us examine the contribution of the two summation terms in (52). We need to distinguish the following five cases:

1. If \(v_i^k = v_j^k \le \frac{1}{2}\), then \(2v_i^k \in (0,1]\). Thus,

$$\begin{aligned} 0 \le -\varPhi (2v_i^k). \end{aligned}$$
(53)

2. If \(\frac{1}{2} < v_i^k = v_j^k\) then \(2v_i^k \in (1,2)\). Thus, using \(\varPhi (1) = 0\),

$$\begin{aligned} \begin{aligned} |\varPhi (2v_i^k)| = \,\,&|\varPhi (2v_i^k) - \varPhi (1)|\\ \le \,\,&|2v_i^k - 1| \cdot L_\varPhi \\ < \,\,&2v_i^k \cdot L_\varPhi . \end{aligned} \end{aligned}$$
(54)

3. If \(v_i^k < v_j^k\) then \(v_j^k - v_i^k, \, v_j^k + v_i^k \in (0,2)\). Thus,

$$\begin{aligned} |\varPhi (v_j^k+v_i^k)-\varPhi (v_j^k-v_i^k)| \le L_\varPhi \cdot 2v_i^k. \end{aligned}$$
(55)

4. If \(v_j^k < v_i^k \le \frac{1}{2}\) then \(v_j^k - v_i^k \in (-1,0)\) and \(v_j^k + v_i^k \in (0,1)\). Thus,

$$\begin{aligned} 0&\le \varPhi (v_j^k-v_i^k) - \varPhi (v_j^k+v_i^k), \end{aligned}$$
(56)
$$\begin{aligned} 0&\le -\varPhi (2v_i^k). \end{aligned}$$
(57)

5. Finally, if \(v_j^k < v_i^k\) and \(\frac{1}{2} < v_i^k\), using the periodicity of \(\varPhi \) we get

$$\begin{aligned}&|\varPhi (v_j^k-v_i^k)-\varPhi (v_j^k+v_i^k)| \nonumber \\&\quad = |\varPhi (v_j^k+v_i^k)-\varPhi (2+v_j^k-v_i^k)| \nonumber \\&\quad \le 2v_i^k \cdot L_\varPhi . \end{aligned}$$
(58)

Combining (50) with (51) and (53)–(58), we obtain that

$$\begin{aligned} v_i^{k+1} - v_i^k =&-\tau \cdot \sum \limits _{\ell \in J_2^i} {\tilde{w}}_{i,\ell } \cdot \bigg ( \varPhi (v_\ell ^k+v_i^k) - \varPhi (v_\ell ^k-v_i^k) \bigg )\nonumber \\&- \tau \cdot \sum \limits _{\ell \in J_3^i} {\tilde{w}}_{i,\ell } \cdot \varPhi (2v_i^k)\nonumber \\ \ge&-\tau \cdot L_\varPhi \cdot 2v_i^k \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{i,\ell }\nonumber \\ >&-v_i^k, \end{aligned}$$
(59)

from which it directly follows that \(v_i^{k+1} > 0\), as claimed.

The proof for \(v_i^{k+1} < 1\) is analogous. Define \({\tilde{v}}_i^k := 1-v_i^k\). For the reasons given above, we obtain \({\tilde{v}}_i^{k+1} > 0\). Consequently, \(1 - v_i^{k+1} > 0\) and \(v_i^{k+1} < 1\) follows.\(\square \)

Theorem 10

(Rank-Order Preservation of the Explicit Scheme) Let \(L_\varPhi \) be the Lipschitz constant of \(\varPhi \) restricted to the interval (0, 2). Furthermore, let \(v_i^0\), for \(i = 1,\ldots ,N\), denote the initially distinct positions in (0, 1) and—in accordance with Theorem 4—let the weight matrix \(\varvec{{\tilde{W}}}\) have constant columns, i.e. \({\tilde{w}}_{j,\ell } = {\tilde{w}}_{i,\ell }\) for \(1 \le i,j,\ell \le N\). Moreover, let \(0< v_i^k< v_j^k < 1\) and assume that the time step size \(\tau \) used in the explicit scheme (50) satisfies

$$\begin{aligned} 0< \tau < \frac{1}{2 \cdot L_\varPhi \cdot \max \limits _{1 \le i \le N} \sum \limits _{\ell = 1}^N {\tilde{w}}_{i,\ell }}. \end{aligned}$$
(60)

Then, we have \(v_i^{k+1} < v_j^{k+1}\).

Proof

For distinct positions, (50) reads

$$\begin{aligned} \begin{aligned} v_i^{k+1} =&\,\, v_i^k + \tau \cdot \sum \limits _{\begin{array}{c} \scriptstyle \ell = 1\\ \scriptstyle \ell \ne i \end{array}}^N {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k-v_i^k)\\&- \tau \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k+v_i^k) \end{aligned} \end{aligned}$$
(61)

for \(i = 1,\ldots ,N\). Considering this explicit discretisation for \(\partial _t v_i\) and \(\partial _t v_j\), we obtain for \(i,j \in \{1,2,\ldots ,N\}\):

$$\begin{aligned} v_j^{k+1}-v_i^{k+1}= & {} \,\, v_j^k - v_i^k \nonumber \\&+ \tau \cdot ({\tilde{w}}_{j,i} + {\tilde{w}}_{i,j}) \cdot \varPhi (v_i^k - v_j^k)\nonumber \\&+ \tau \cdot \sum \limits _{\begin{array}{c} \scriptstyle \ell = 1\\ \scriptstyle \ell \ne i,j \end{array}}^N \bigg ( {\tilde{w}}_{j,\ell } \cdot \varPhi (v_\ell ^k - v_j^k) \nonumber \\&- {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k - v_i^k) \bigg ) \nonumber \\&- \, \tau \cdot \sum \limits _{\ell = 1}^N \bigg ( {\tilde{w}}_{j,\ell } \cdot \varPhi (v_\ell ^k + v_j^k) \nonumber \\&- {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k + v_i^k) \bigg ). \end{aligned}$$
(62)

Now remember that \(v_i^k < v_j^k\) by assumption and that—as a consequence—

$$\begin{aligned} \tau \cdot ({\tilde{w}}_{j,i} + {\tilde{w}}_{i,j}) \cdot \varPhi (v_i^k - v_j^k) > 0. \end{aligned}$$
(63)

Using the fact that \({\tilde{w}}_{j,\ell } = {\tilde{w}}_{i,\ell }\) for \(1 \le i,j,\ell \le N\) and that \(\varPhi \) is Lipschitz continuous in (0, 2), we also know that

$$\begin{aligned} T_1&:= \tau \cdot \sum \limits _{\begin{array}{c} \scriptstyle \ell = 1\\ \scriptstyle \ell \ne i,j \end{array}}^N \bigg | {\tilde{w}}_{j,\ell } \cdot \varPhi (v_\ell ^k - v_j^k) - {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k - v_i^k) \bigg |\nonumber \\&= \tau \cdot \sum \limits _{\begin{array}{c} \scriptstyle \ell = 1\\ \scriptstyle \ell \ne i,j \end{array}}^N {\tilde{w}}_{j,\ell } \cdot \bigg | \varPhi (v_\ell ^k - v_j^k) - \varPhi (v_\ell ^k - v_i^k) \bigg |\nonumber \\&\le \tau \cdot L_\varPhi \cdot |v_i^k - v_j^k| \cdot \sum \limits _{\begin{array}{c} \scriptstyle \ell = 1\\ \scriptstyle \ell \ne i,j \end{array}}^N {\tilde{w}}_{j,\ell } , \end{aligned}$$
(64)
$$\begin{aligned} T_2&:= \tau \cdot \sum \limits _{\ell = 1}^N \bigg | {\tilde{w}}_{j,\ell } \cdot \varPhi (v_\ell ^k + v_j^k) - {\tilde{w}}_{i,\ell } \cdot \varPhi (v_\ell ^k + v_i^k) \bigg |\nonumber \\&= \tau \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{j,\ell } \cdot \bigg | \varPhi (v_\ell ^k + v_j^k) - \varPhi (v_\ell ^k + v_i^k) \bigg |\nonumber \\&\le \tau \cdot L_\varPhi \cdot |v_j^k-v_i^k| \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{j,\ell }. \end{aligned}$$
(65)

Let the time step size \(\tau \) fulfil (60). Then, we can write

$$\begin{aligned} T_1 + T_2 \le 2 \cdot \tau \cdot L_\varPhi \cdot |v_j^k-v_i^k| \cdot \sum \limits _{\ell = 1}^N {\tilde{w}}_{j,\ell } < v_j^k-v_i^k. \end{aligned}$$
(66)

In combination with \(T_1, T_2 \ge 0\), it follows that

$$\begin{aligned} T_2 - T_1 \ge -T_2 - T_1 > -(v_j^k - v_i^k), \end{aligned}$$
(67)

and we immediately know that \(v_j^k - v_i^k - T_1 + T_2 > 0\). Together with (62) and (63), we get \(0 < v_j^{k+1} - v_i^{k+1}\), as claimed.\(\square \)

Theorem 11

(Convergence of the Explicit Scheme) Let (6) be a twice continuously differentiable convex function. Then, the explicit scheme (50) converges for time step sizes

$$\begin{aligned} 0< \tau \le \frac{1}{2 \cdot L_{\varPhi } \cdot \max \limits _{1 \le i \le N} \sum \limits _{j=1}^N {\tilde{w}}_{i,j}} < \frac{2}{L}, \end{aligned}$$
(68)

where \(L_\varPhi \) denotes the Lipschitz constant of \(\varPhi \) restricted to the interval (0, 2) and L refers to the Lipschitz constant of the gradient of (6).

Proof

Convergence of the gradient method to the global minimum of \(E(\textit{\textbf{v}},\tilde{\textit{\textbf{W}}})\) is well known for continuously differentiable convex functions with Lipschitz continuous gradient and time step sizes \(0< \tau < 2 / L\) (see e.g. [33, Theorem 2.1.14]). A valid Lipschitz constant is given by \(L_{\mathrm {max}}\) as defined in (21). Consequently, time step sizes \(\tau \) fulfilling (68) ensure convergence of (50). The "smaller or equal" relation in (68) is admissible since (21) guarantees \(L < L_{\mathrm {max}}\), so that even \(\tau = 2/L_{\mathrm {max}} = 1/(2 \cdot L_\varPhi \cdot \max _{1 \le i \le N} \sum _{j=1}^N {\tilde{w}}_{i,j})\) is a valid time step size.\(\square \)

Remark 3

(Optimal Time Step Size) The optimal time step size, i.e. the value of \(\tau \) which leads to the most rapid descent, is given by \(\tau = 1/L\) (see e.g. [33, Section 2.1.5]). Thus, we suggest using \(\tau = 1/L_{\mathrm {max}}\).
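Putting everything together, a minimal NumPy implementation of the explicit scheme (50) might look as follows (our sketch; it assumes the flux \(\varPhi _{1,1}\), so \(L_\varPhi = 1\) by (22), and uses the suggested step size \(\tau = 1/L_{\mathrm {max}}\) from (21)):

```python
import numpy as np

def phi(s):
    """Flux Phi_{1,1}: odd, 2-periodic, phi(s) = s - 1 on (0, 2); phi(0) = 0."""
    return s - np.sign(s)

def evolve(f, W, n_steps):
    """Run n_steps of the explicit scheme (50) for f in (0,1)^N, W >= 0."""
    tau = 1.0 / (4.0 * W.sum(axis=1).max())   # tau = 1/L_max, cf. (21), L_Phi = 1
    v = f.astype(float)
    for _ in range(n_steps):
        D = v[None, :] - v[:, None]           # v_l - v_i
        S = v[None, :] + v[:, None]           # v_l + v_i
        v = v + tau * np.sum(W * (phi(D) - phi(S)), axis=1)
    return v
```

Since \(\tau = 1/L_{\mathrm {max}}\) also satisfies (51) and (60), the iterates stay within (0, 1) and, for weight matrices with constant columns, preserve the rank order.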

5 Application to Image Enhancement

Now that we have presented a stable and convergent numerical scheme, we apply (50) to enhance the contrast of digital greyscale and colour images. Throughout all experiments, we use \(\varPsi = \varPsi _{1,1}\) (cf. Table 1 and Fig. 2).

5.1 Greyscale Images

The application of the proposed model to greyscale images follows the ideas presented in [4]. We define a digital greyscale image as a mapping \(f: \{1,\ldots ,n_x\} \times \{1,\ldots ,n_y\} \rightarrow [0,1]\). Note that, before processing, all grey values are mapped to the open interval (0, 1) to ensure the validity of our model. The grid position of the i-th image pixel is given by the vector \(\textit{\textbf{x}}_i\), whereas \(v_i\) denotes the corresponding grey value. Subsequently, we will see that a well-considered choice of the weighting matrix \(\varvec{{\tilde{W}}}\) allows us to enhance either the global or the local contrast of a given image.
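The paper leaves the concrete map into (0, 1) unspecified; one possible choice is an affine rescaling with a small margin, sketched below. The function name and the margin eps are hypothetical:

```python
import numpy as np

def to_open_unit_interval(f, eps=1e-3):
    """Affinely map 8-bit grey values {0,...,255} into (0, 1).

    The margin eps > 0 keeps all values away from the interval ends;
    its concrete size is an assumption of this sketch.
    """
    v = f.astype(np.float64) / 255.0     # values in [0, 1]
    return eps + (1.0 - 2.0 * eps) * v   # values in (0, 1)
```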

5.1.1 Global Contrast Enhancement

For global contrast enhancement, we make use of the global model as discussed in Sect. 3.2. Only the N different occurring grey values \(v_i\)—and not their positions in the image—are considered. We let every entry \({\tilde{w}}_{i,j}\) of the weighting matrix denote the frequency of grey value \(v_j\) in the image. Assuming an 8-bit greyscale image, this leads to a weighting matrix of size \(256 \times 256\) which is independent of the image size. As illustrated in Fig. 5, global contrast enhancement can be achieved in two ways: as a first option, one can use the explicit scheme (50) to describe the evolution of all grey values \(v_i\) up to some time t (see column two of Fig. 5). The amount of contrast enhancement grows with increasing values of t. In our experiments, an image size of \(481 \times 321\) pixels and the application of the flux function \(\varPhi _{1,1}\) with \(L_\varPhi =1\) imply an upper bound of \(1/(2 \cdot 481 \cdot 321)\) for \(\tau \). Thus, we can reach the time \(t = 2 \cdot 10^{-6}\) in Fig. 5 in a single iteration. If one is only interested in an enhanced version of the original image with maximum global contrast, there is an alternative: the steady-state solution (46) derived for linear flux functions. The results are shown in the last column of Fig. 5. This figure also confirms that the solution of the explicit scheme (50) converges to the steady-state solution (46) for \(t \rightarrow \infty \). From (46), it is clear that this steady state is equivalent to histogram equalisation. In summary, the application of our global model to greyscale images offers an evolution equation for histogram equalisation which allows one to control the amount of contrast enhancement in an intuitive way through the time parameter t.
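Since the steady state coincides with histogram equalisation, the following sketch of the classical cumulative-histogram mapping illustrates the limit case \(t \rightarrow \infty \). It is a standard formulation, not a transcription of (46):

```python
import numpy as np

def equalise(v):
    """Histogram equalisation of a 2-D uint8 image via the empirical CDF.

    This is the maximum-contrast limit described above; the explicit
    scheme (50) approaches it as the diffusion time t grows.
    """
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = np.cumsum(hist) / v.size                # values in (0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)  # grey value mapping
    return lut[v]
```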

Fig. 5: Global contrast enhancement using \(\varPhi = \varPhi _{1,1}\) and greyscale versions of images from the BSDS500 [1]

5.1.2 Local Contrast Enhancement

In order to achieve local contrast enhancement, we use our model to describe the evolution of grey values \(v_i\) at all \(n_x \cdot n_y\) image grid positions. The change of every grey value \(v_i\) depends on all grey values within a disk-shaped neighbourhood of radius \(\varrho \) around its grid position \(\textit{\textbf{x}}_i\). We assume that

$$\begin{aligned} {\tilde{w}}_{i,j} := \gamma (|\textit{\textbf{x}}_j-\textit{\textbf{x}}_i|), \qquad \forall i,j \in \{1,2,\ldots ,N\}, \end{aligned}$$
(69)

where we weight the spatial distance \(|\textit{\textbf{x}}_j-\textit{\textbf{x}}_i|\) by a function \(\gamma : {\mathbb {R}}_0^+ \rightarrow [0,1]\) supported on \([0,\varrho )\) which fulfils

$$\begin{aligned} \begin{aligned}&\gamma (x) \in (0,1],&\text {if } x < \varrho ,\\&\gamma (x) = 0,&\text {if } x \ge \varrho . \end{aligned} \end{aligned}$$
(70)

The choice of \(\gamma \) is application-dependent. However, it usually makes sense to define \(\gamma (x)\) as a non-increasing function of x. Possible choices are, for example,

$$\begin{aligned} \gamma _1(x)&= {\left\{ \begin{array}{ll} 1, &{} \text {if } x < \varrho ,\\ 0, &{} \text {else}, \end{array}\right. } \end{aligned}$$
(71)
$$\begin{aligned} \gamma _2(x)&= {\left\{ \begin{array}{ll} 1 - 6 \frac{x^2}{\varrho ^2} + 6 \frac{x^3}{\varrho ^3}, &{} \text {if } 0 \le x< \frac{\varrho }{2},\\ 2 \cdot (1 - \frac{x}{\varrho })^3, &{} \text {if } \frac{\varrho }{2} \le x < \varrho ,\\ 0, &{} \text {else}, \end{array}\right. } \end{aligned}$$
(72)

which are both sketched in Fig. 6. When applying this local model to images, we make use of mirrored boundary conditions in order to avoid artefacts at the image boundaries. Figure 7 provides an example of local contrast enhancement of digital greyscale images. Again, we describe the grey value evolution with the explicit scheme (50). Furthermore, we use \(\gamma _1\) to model the influence of neighbouring grey values. As is evident from Fig. 7, increasing values of t go along with enhanced local contrast.
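For reference, (71) and (72) translate directly into code. The vectorised sketch below assumes NumPy arrays of distances as input:

```python
import numpy as np

def gamma1(x, rho):
    """Box function (71): full weight inside the disk, zero outside."""
    return np.where(np.asarray(x, dtype=np.float64) < rho, 1.0, 0.0)

def gamma2(x, rho):
    """Scaled cubic B-spline (72); both pieces meet at gamma2(rho/2) = 1/4."""
    y = np.asarray(x, dtype=np.float64) / rho
    out = np.zeros_like(y)
    near = y < 0.5
    far = (y >= 0.5) & (y < 1.0)
    out[near] = 1.0 - 6.0 * y[near]**2 + 6.0 * y[near]**3
    out[far] = 2.0 * (1.0 - y[far])**3
    return out

# Weights (69) for pixel i: w_i = gamma1(d_i, rho) or gamma2(d_i, rho),
# where d_i is the array of distances |x_j - x_i| to all grid positions.
```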

Fig. 6: Box function \(\gamma _1\) and scaled cubic B-spline \(\gamma _2\)

Fig. 7: Local contrast enhancement using \(\varPhi = \varPhi _{1,1}\), \(\gamma =\gamma _1\), \(\varrho = 60\), and greyscale versions of images from the BSDS500 [1]

5.2 Colour Images

Based on the assumption that our input data is given in sRGB colour space [48] (in the following denoted by RGB), we represent a digital colour image by the mapping \(f: \{1,\ldots ,n_x\} \times \{1,\ldots ,n_y\} \rightarrow [0,1]^3\). Subsequently, our aim is the contrast enhancement of digital colour images without distorting the colour information. This means that we only want to adapt the luminance but not the chromaticity of a given image. For this purpose, we convert the given image data to YCbCr colour space [44, Section 3.5] since this representation provides a separate luminance channel. Next, we perform contrast enhancement on the luminance channel only. Just as for greyscale images, we map all Y-values to the interval (0, 1) to fulfil our model requirements. After enhancing the contrast, we transform the colour information of the image back to RGB colour space.

At this point, it is important to mention that the colour gamut of the RGB colour space is a subset of the YCbCr colour gamut. During the conversion of colour coordinates from YCbCr to RGB colour space, the so-called colour gamut problem may occur: colours from the YCbCr colour gamut may lie outside the RGB colour gamut and thus cannot be represented in RGB colour coordinates. Naik and Murthy [32] state that simply clipping the values to the bounds creates an undesired hue shift and may lead to colour artefacts. In order to avoid the colour gamut problem, we adapt the ideas presented by Nikolova and Steidl [34], which are based on the intensity representation of the HSI colour space [19, Section 6.2.3]. Using the original and enhanced intensities, they define an affine colour mapping and transform the original RGB values. This preserves the hue and results in an enhanced RGB image. It is straightforward to show that their algorithms are valid for any intensity \({\hat{f}}\) of type

$$\begin{aligned} {\hat{f}} = c_r \cdot r + c_g \cdot g + c_b \cdot b, \end{aligned}$$
(73)

with \(c_r + c_g + c_b = 1\) and \(c_r, c_g, c_b \in [0,1]\), where r, g, and b denote RGB colour coordinates. Thus, they are also applicable to the luminance representation of the YCbCr colour space, i.e. \(c_r=0.299\), \(c_g=0.587\), \(c_b=0.114\). Tian and Cohen make use of the same idea in [52]. As in [34], our result image is a convex combination of the outcomes of a multiplicative and an additive algorithm (see [34, Algorithms 4 and 5]) with coefficients \(\lambda \) and \(1-\lambda \) for \(\lambda \in [0,1]\). During our experiments, we use a fixed value of \(\lambda = 0.5\) (for details on how to choose \(\lambda \), we refer to [34]). An overview of our strategy for contrast enhancement of digital colour images is given in Fig. 8.
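The following sketch condenses this strategy into code. It assumes the plain multiplicative mapping \(r \cdot {\hat{g}}/{\hat{f}}\) and additive mapping \(r + ({\hat{g}} - {\hat{f}})\); the actual Algorithms 4 and 5 of [34] contain additional gamut-aware case distinctions that we omit here, so the final clipping is only a crude stand-in:

```python
import numpy as np

# Luminance coefficients of the YCbCr colour space, cf. (73).
C = np.array([0.299, 0.587, 0.114])

def enhance_colour(rgb, enhance_y, lam=0.5, eps=1e-6):
    """Simplified sketch of hue-preserving contrast enhancement after [34].

    rgb:       float array of shape (h, w, 3) with values in [0, 1].
    enhance_y: callable mapping the luminance channel to its enhanced
               version, e.g. the evolution (50) or its steady state.
    """
    f = rgb @ C                              # original luminance, cf. (73)
    g = enhance_y(f)                         # enhanced luminance
    mult = rgb * (g / (f + eps))[..., None]  # scales channels, keeps hue
    add = rgb + (g - f)[..., None]           # shifts channels uniformly
    out = lam * mult + (1.0 - lam) * add     # convex combination
    # Crude safeguard; [34] resolves the gamut problem without clipping.
    return np.clip(out, 0.0, 1.0)
```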

Fig. 8: Procedure of contrast enhancement for digital colour images following [34]

5.2.1 Global Contrast Enhancement

Again, we apply the global model from Sect. 3.2 in order to achieve global contrast enhancement. As mentioned before, we consider the N different occurring Y-values of the YCbCr representation of the input image and denote them by \(v_i\) (similar to Sect. 5.1.1, we neglect their positions in the image). Every entry \({\tilde{w}}_{i,j}\) of the weighting matrix contains the number of occurrences of the value \(v_j\) in the Y-channel of the image. It becomes clear that the application of our model—in this setting—basically comes down to histogram equalisation of the Y-channel. Figure 9 shows the resulting RGB images after global contrast enhancement. Similar to the greyscale scenario, we can either apply the explicit scheme (50) or—for \(\varPhi = \varPhi _{a,1}\)—compute the steady-state solution following (46). In the first case, the amount of contrast enhancement grows with the positive time parameter t. The second column of Fig. 9 shows the results for \(\varPhi = \varPhi _{1,1}\) at a given time t. The corresponding steady-state solutions are illustrated in the last column of Fig. 9.
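As a hypothetical end-to-end illustration, one can combine the colour mapping sketched in Sect. 5.2 with an equalisation of the quantised Y-channel; the helper below is an assumed float-valued variant of the earlier equalisation sketch:

```python
import numpy as np

def equalise_float(f, bins=256):
    """Map each Y-value in (0, 1) to its empirical CDF value, i.e.
    histogram equalisation on a quantised luminance channel."""
    q = np.clip((f * bins).astype(int), 0, bins - 1)
    cdf = np.cumsum(np.bincount(q.ravel(), minlength=bins)) / q.size
    return cdf[q]

# rgb: float array of shape (h, w, 3) with values in [0, 1]
# out = enhance_colour(rgb, enhance_y=equalise_float, lam=0.5)
```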

Fig. 9: Global contrast enhancement using \(\varPhi = \varPhi _{1,1}\), \(\lambda = 0.5\), and images from [27]

Fig. 10: Local contrast enhancement using \(\varPhi = \varPhi _{1,1}\), \(\gamma =\gamma _1\), \(\varrho = 60\), \(\lambda = 0.5\), and images from [27]

5.2.2 Local Contrast Enhancement

In a similar manner—and adapting the ideas from Sect. 5.1.2—we achieve local contrast enhancement in colour images. For this purpose, we describe the evolution of the Y-values \(v_i\) at all \(n_x \cdot n_y\) image grid positions using a disk-shaped neighbourhood of radius \(\varrho \) around the corresponding grid positions \(\textit{\textbf{x}}_i\). The entries of the weighting matrix \(\varvec{{\tilde{W}}}\) follow (69). In combination with mirrored boundary conditions, the explicit scheme (50) allows us to increase the local contrast of an image with growing t. Figure 10 shows exemplary results for \(\varPhi = \varPhi _{1,1}\) and \(\gamma = \gamma _1\) (cf. (71)). Note how well—in comparison with the global set-up in Fig. 9—the structure of the door is enhanced while the details of the door knob are preserved. The differences are even larger in the second image: the contrast increases for both the couple in the foreground and the background scenery, which keeps both clearly visible also for larger times t.

5.3 Parameters

In total, our model has up to six parameters: \(\varPhi \), \(\tau \), t, \(\lambda \), \(\varrho \), and \(\gamma \). During our experiments, we have fixed \(\varPhi (s)\) to the linear flux function \(\varPhi _{1,1}(s)\) and \(\lambda \) to 0.5. Valid bounds for the time step size \(\tau \) are given in Theorems 9–11. From the theory in Sect. 3 and the subsequent experiments on greyscale and colour images, it becomes clear that the amount of contrast enhancement grows with the diffusion time. Thus, it remains to discuss the influence of the parameters \(\varrho \) and \(\gamma \). We found that the neighbourhood radius \(\varrho \) affects the diffusion time and controls the amount of perceived local contrast enhancement, i.e. it steers the localisation of the contrast enhancement process. Whereas small radii lead to high contrast already in small image areas, the size of image sections with high contrast increases with \(\varrho \). For sufficiently large values of \(\varrho \), global histogram equalisation is approximated. Another interesting point is the choice of the weighting function \(\gamma \). Overall, choosing \(\gamma =\gamma _1\) leads to a more homogeneous contrast enhancement and a smoother visual impression. For \(\gamma =\gamma _2\), the focus always lies on the neighbourhood centre, which implies even stronger enhancement of local structures than in the preceding case. In summary, \(\gamma _2\) leads to more enhancement which, however, also creates undesired effects in smooth or noisy regions. Thus, we prefer \(\gamma _1\) over \(\gamma _2\). Further experiments which visualise the effect of the parameters can be found in the supplementary material.

5.4 Related Work from an Application Perspective

Now that we have demonstrated the applicability of our model to digital images, we briefly discuss its relation to other existing theories in the context of image processing.

As mentioned in Sect. 5.1.1, applying the global model—with the entries of \(\varvec{{\tilde{W}}}\) representing the grey value frequencies—is identical to histogram equalisation (a common formulation can, for example, be found in [19]). Furthermore, there exist other closely related histogram specification techniques—such as [35, 36, 45]—which can have the same steady state. If we compare our evolution with the histogram modification flow introduced by Sapiro and Caselles [45], we see that their flow can also be translated into a combination of repulsion among grey values and a barrier function. However, in [45] the repulsive force is constant and the barrier function quadratic. Thus, they cannot be derived from the same kind of interaction between the \(v_i\) and their reflected counterparts as in our paper.

Referring to Sect. 5.1.2, there also exist well-known approaches which aim to enhance the local image contrast, such as adaptive histogram equalisation—see [42] and the references therein—or contrast-limited adaptive histogram equalisation [56]. The latter technique tries to overcome the over-amplification of noise in mostly homogeneous image regions that occurs with adaptive histogram equalisation. Both approaches share the basic idea of our approach in Sect. 5.1.2 and perform histogram equalisation for each pixel, i.e. the mapping function for every pixel is determined from a neighbourhood of predefined size and its corresponding histogram.

Another related research topic is the rich field of colour image enhancement which we broach in Sect. 5.2. A short review of existing methods—as well as two new ideas—is presented in [2]. Therein, Bassiou and Kotropoulos also mention the colour gamut problem for methods which perform contrast enhancement in a different colour space and transform the colour coordinates to RGB afterwards. Of particular interest are the publications by Naik and Murthy [32] and Nikolova and Steidl [34], whose ideas are used in Sect. 5.2. Both suggest strategies—based on an affine colour transform—to overcome the colour gamut problem while avoiding colour artefacts in the resulting image. A recent approach which also makes use of these ideas is presented by Tian and Cohen [52]. Ojo et al. [37] make use of the HSV colour space to avoid the colour gamut problem when enhancing the contrast of colour images. A variational approach for contrast enhancement which aims to preserve the hue of the input image was recently published by Pierre et al. [41].

6 Conclusions and Outlook

In our paper, we have presented a mathematical model which describes pure backward diffusion as gradient descent of strictly convex energies. The underlying evolution makes use of ideas from the area of collective behaviour and—in terms of the latter—our model can be understood as a fully repulsive discrete first-order swarm model. It is surprising not only that our model allows backward diffusion to be formulated as a convex optimisation problem, but also that it suffices to impose reflecting boundary conditions in the diffusion co-domain in order to guarantee stability. This strategy is contrary to existing approaches which either assume forward or zero diffusion at extrema or add classical fidelity terms to avoid instabilities. Furthermore, the discretisation of our model does not require sophisticated numerics. We have proven that a straightforward explicit scheme suffices to preserve the stability of the time-continuous evolution. In our experiments, we have shown that our model can be applied directly to contrast enhancement of digital greyscale and colour images.

We see our contribution mainly as an example of stable modelling of backward parabolic evolutions that create neither theoretical nor numerical problems. We are convinced that this concept has far more widespread applications in inverse problems, image processing, and computer vision. Exploring them will be part of our future research.