Stable Backward Diffusion Models that Minimise Convex Energies
arXiv:1903.03491v1 [math.NA] 8 Mar 2019

Backward diffusion processes appear naturally in image enhancement and deblurring applications. However, the inverse problem of backward diffusion is known to be ill-posed, and straightforward numerical algorithms are unstable. So far, existing stabilisation strategies in the literature require sophisticated numerics to solve the underlying initial value problem. Therefore, it is desirable to establish a backward diffusion model which implements a smart stabilisation approach that can be used in combination with a simple numerical scheme. We derive a class of space-discrete one-dimensional backward diffusion processes as gradient descents of energies where we gain stability by imposing range constraints. Interestingly, these energies are even convex. Furthermore, we establish a comprehensive theory for the time-continuous evolution, and we show that stability carries over to a simple explicit time discretisation of our model. Finally, we confirm the stability and usefulness of our technique in experiments in which we enhance the contrast of digital greyscale and colour images.


Introduction
Forward diffusion processes are well-suited to describe the smoothing of a given signal or image. This smoothing operation implies a loss of high frequencies or fine details in the original data. As a result, the inverse problem, backward diffusion, suffers from a deficit of exactly the information which is needed to uniquely reconstruct the original data. Noise in the measured data increases this difficulty even further. Consequently, a solution to the inverse problem, if it exists at all, is highly sensitive and depends heavily on the input data: even the smallest flaw in the initial data can have a large impact on the evolution and may cause large deviations. Therefore, it becomes clear that backward diffusion processes necessitate further stabilisation.
Previous Work on Backward Diffusion. Already more than 60 years ago, John [26] discussed the quality of a numerical solution to the inverse diffusion problem, given that a solution exists and that it is bounded and non-negative. Since then, a large number of different regularisation methods have evolved which achieve stability to a certain degree [25]. Mair et al. [31] discuss the well-posedness of deblurring Gaussian blur in the discrete image domain based on analytic number theory.
In summary, the presented methods offer an insight into the challenge of handling backward diffusion in practice and underline the demand for careful stabilisation strategies and sophisticated numerical methods.
Diffusion and Energy Minimisation. In our paper we are going to present an alternative approach to deal with backward diffusion problems. To understand it better, it is useful to recapitulate some relations between diffusion and energy minimisation. For the sake of convenience we assume a one-dimensional evolution that smoothes an initial signal f : [a, b] → R. In this context, the original signal f serves as the initial state of the diffusion equation, where u = u(x, t) represents the filtered outcome with u(x, 0) = f(x). Additionally, let u_x = ∂_x u and assume reflecting boundary conditions at x = a and x = b. Given a nonnegative diffusivity function g, growing diffusion times t lead to simpler representations of the input signal. From Perona and Malik's work [40] we know that the smoothing effect at signal edges can be reduced if g is a decreasing function of the contrast u_x^2. As long as the flux function Φ(u_x) := g(u_x^2) u_x is strictly increasing in u_x, the corresponding forward diffusion process ∂_t u = Φ'(u_x) u_xx involves no edge sharpening. This diffusion can be regarded as the gradient descent evolution which minimises the energy

  E[u] = (1/2) ∫_a^b Ψ(u_x^2) dx .

The potential function Ψ̃(u_x) = Ψ(u_x^2) is strictly convex in u_x, increasing in u_x^2, and fulfils Ψ'(u_x^2) = g(u_x^2). Furthermore, the energy functional has a flat minimiser which is, due to the strict convexity of the energy functional, unique. The gradient descent / diffusion evolution is well-posed and converges towards this minimiser for t → ∞. Due to this classical emergence of well-posed forward diffusion from strictly convex energies it seems natural to believe that backward diffusion processes are necessarily associated with non-convex energies. However, as we will see, this conjecture is wrong.
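For reference, the forward process described above can be sketched in a few lines. The diffusivity g(s^2) = 1/(1 + s^2/λ^2) used here is the classical Perona–Malik choice (an illustrative assumption, not a function taken from this paper); with λ = 2 the flux stays strictly increasing for the gradients occurring in this example, so the evolution is purely smoothing.

```python
import numpy as np

def forward_diffusion_step(u, tau=0.1, lam=2.0):
    """One explicit step of the 1D forward diffusion du/dt = (g(u_x^2) u_x)_x
    with reflecting boundary conditions; g(s2) = 1/(1 + s2/lam^2) is the
    classical Perona-Malik diffusivity (an illustrative choice)."""
    u_ext = np.pad(u, 1, mode="edge")      # reflecting boundaries (zero flux)
    ux = np.diff(u_ext)                    # forward differences of the extended signal
    flux = ux / (1.0 + (ux / lam) ** 2)    # Phi(u_x) = g(u_x^2) u_x
    return u + tau * np.diff(flux)         # discrete divergence of the flux

u = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
for _ in range(100):
    u = forward_diffusion_step(u)
```

The average value is preserved and the signal flattens monotonically, mirroring the convergence of the gradient descent towards the flat minimiser.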
Our Contribution. In our article, we show that a specific class of backward diffusion processes are gradient descent evolutions of energies with one unexpected property: they are convex! Our second innovation is the incorporation of a specific constraint: we impose reflecting boundary conditions in the diffusion co-domain. While such range constraints have shown their usefulness in other contexts (see e.g. [34]), to our knowledge they have never been used for stabilising backward diffusions. For our novel backward diffusion models, we also show a surprising numerical fact: a simple explicit scheme turns out to be stable and convergent. Last but not least, we apply our models to the contrast enhancement of greyscale and colour images.
This article is a revised version of our conference contribution [4], which we extend in several aspects. First, we enhance our model for convex backward diffusion to support not only a global and a weighted setting but also a localised variant. We analyse this extended model in terms of well-posedness, stability, and convergence towards a unique minimiser. Furthermore, we formulate a simple explicit scheme for our newly proposed approach which shares all important properties with the time-continuous evolution. In this context, we provide a detailed discussion of the selection of suitable time step sizes. Additionally, we suggest two new applications: global contrast enhancement of digital colour images and local contrast enhancement of digital greyscale and colour images.
Structure of the Paper. In Section 2, we present our model for convex backward diffusion with range constraints. We describe a general approach which allows us to formulate weighted local and global evolutions. Section 3 includes proofs of model properties such as range and rank-order preservation, as well as a convergence analysis and the derivation of explicit steady-state solutions. Section 4 provides a simple explicit scheme which can be used to solve the occurring initial value problem. In Section 5, we explain how to enhance the global and local contrast of digital images using the proposed model. Furthermore, we discuss the relation to the existing literature on contrast enhancement. In Section 6, we draw conclusions from our findings and give an outlook on future research.

Model
Let us now explore the roots of our model and derive, in a second step, the particle evolution which forms the heart of our method and which is given by the gradient descent of a convex energy.

Motivation from Swarm Dynamics
The idea behind our model goes back to the scenario of describing a one-dimensional evolution of particles within a closed system. Recent literature on mathematical swarm models employs a pairwise potential U : R^n → R to characterise the behaviour of individual particles (see e.g. [12,11,17,15,10] and the references therein). The potential function allows one to steer attractive and repulsive forces among swarm mates. Physically simplified models like [16] neglect inertia and describe the individual particle velocity ∂_t v_i within a swarm of size N directly as a sum of pairwise interaction forces derived from U, where v_i and v_j denote particle positions in R^n. These models are also referred to as first order models. Often they are inspired by biology and describe long-range attractive and short-range repulsive behaviour between swarm members. The interplay of attractive and repulsive forces leads to flocking and allows the swarm as a whole to gain stability. Inverting this behaviour, resulting in short-range attractive and long-range repulsive forces, leads to a highly unstable scenario in which the swarm splits up into small separating groups which might never reach a point where they stand still. One would expect that a restriction to repulsive forces only will increase this instability even further. However, we will present a model which copes well with exactly this situation. In our set-up every particle moves within the open interval (0, 1) and has an interaction radius of size 1. Keeping this in mind, let us briefly examine the two main assumptions of the evolution. First, there exist reflections of all particles at the left and right domain boundary. Secondly, the particles repel each other and, furthermore, get repelled by the reflections. However, due to the limited viewing range, only one of the two reflections of a certain particle is considered at any given time, namely the one which is closer to the reference particle (see Figure 1).
A special case occurs if the reference particle is located at position 0.5: the repulsive forces of its two own reflections cancel out.
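This visibility rule can be made concrete in a few lines (the function name is ours): a particle at v ∈ (0, 1) has reflections at −v and 2 − v, and with interaction radius 1 only the nearer of the two lies within reach.

```python
def nearest_own_reflection(v):
    """Return the reflection of a particle at position v in (0, 1) that lies
    within its interaction radius 1: the left reflection -v if v < 1/2,
    the right reflection 2 - v if v > 1/2."""
    left, right = -v, 2.0 - v
    # distances to the two reflections are 2v and 2 - 2v, respectively
    return left if v - left < right - v else right
```

At v = 0.5 both distances equal 1 and the two repulsive forces cancel, which is exactly the special case noted above.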

Discrete Variational Model
We propose a dynamical system which has its roots in a spatial discretisation of the energy functional (2). Furthermore, we make use of a decreasing energy function Ψ : R_0^+ → R and a global range constraint on u. The corresponding flux function Φ is defined as Φ(s) := Ψ'(s^2) s. Our goal is to describe the evolution of one-dimensional, not necessarily distinct, particle positions v_i ∈ (0, 1), where i = 1, …, N. Therefore, we extend the position vector v = (v_1, …, v_N)^T with the additional coordinates v_{N+1}, …, v_{2N} defined as v_{2N+1−i} := 2 − v_i ∈ (1, 2). This extended position vector v ∈ (0, 2)^{2N} allows us to evaluate the energy function (4), which models the repulsion potential between all positions v_i and v_j. The coefficient w_{i,j} denotes entry j in row i of a constant non-negative weight matrix W = (w_{i,j}) ∈ (R_0^+)^{2N×2N}. It models the importance of the interaction between the positions v_i and v_j. All diagonal elements of the weight matrix are positive, i.e. w_{i,i} > 0 for all i ∈ {1, 2, …, 2N}. In addition, we assume that the weights for the extended positions are the same as those for the original ones.

  Ψ_{a,n}(s^2)  = a · ((s−1)^{2n} − 1)
  Ψ'_{a,n}(s^2) = (a·n / s) · (s−1)^{2n−1}
  Φ_{a,n}(s)    = a·n · (s−1)^{2n−1}

Table 1. One exemplary class of penaliser functions Ψ(s^2) for s ∈ [0, 1] with n ∈ N, a > 0 and the corresponding diffusivity Ψ'(s^2) and flux Φ(s) functions.
For the penaliser function Ψ we impose several restrictions which we discuss subsequently. Table 1 shows one reasonable class of functions Ψ_{a,n} as well as the corresponding diffusivities Ψ'_{a,n} and flux functions Φ_{a,n}. In Figure 2 we provide an illustration of three functions using a = 1 and n = 1, 2, 3. The penaliser is constructed following a three-step procedure. First, the function Ψ(s^2) is defined as a continuously differentiable, decreasing, and strictly convex function for s ∈ [0, 1] with Ψ(0) = 0 and Φ^−(1) = 0 (left-sided derivative). Next, Ψ is extended to [−1, 1] by symmetry and to R by the periodicity Ψ((2 + s)^2) = Ψ(s^2). This results in a penaliser Ψ(s^2) which is continuously differentiable everywhere except at even integers, where it is still continuous. Note that Ψ(s^2) is increasing on [−1, 0] and [1, 2]. The flux Φ is continuous and increasing on (0, 2) with jump discontinuities at 0 and 2 (see Figure 2). Furthermore, we have Φ(s) = −Φ(−s) and Φ(2 + s) = Φ(s). Exploiting the properties of Ψ allows us to rewrite (4) without the redundant entries v_{N+1}, …, v_{2N}. A gradient descent for (4) is given by (7), where the v_i now are functions of the time t and the index set J_i^1 collects those j for which v_j ≠ v_i. The flux Φ(v_j − v_i) is negative for 0 < v_j − v_i < 1 and positive otherwise, thus driving v_i always away from v_j. This implies that we have negative diffusivities Ψ' for all |v_j − v_i| < 1. Due to the convexity of Ψ(s^2), the absolute values of the repulsive forces Φ decrease with the distance between v_i and v_j. We remark that the jumps of Φ at 0 and 2 are not problematic here, as all positions v_i and v_j in the argument of Φ are distinct by the definition of J_i^1.
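The class in Table 1 can be checked numerically. The code below encodes Ψ_{a,n}(s^2) = a((s−1)^{2n} − 1) and Φ_{a,n}(s) = a·n·(s−1)^{2n−1} on [0, 1] as we read them from the table (our own transcription) and verifies the defining relation Φ(s) = Ψ'(s^2)·s via the identity d/ds Ψ(s^2) = 2Φ(s).

```python
import numpy as np

def psi(s, a=1.0, n=1):
    """Penaliser Psi_{a,n}(s^2) = a * ((s - 1)^(2n) - 1), for s in [0, 1]."""
    return a * ((s - 1.0) ** (2 * n) - 1.0)

def phi(s, a=1.0, n=1):
    """Flux Phi_{a,n}(s) = a * n * (s - 1)^(2n - 1), i.e. Phi(s) = Psi'(s^2) * s."""
    return a * n * (s - 1.0) ** (2 * n - 1)

# Construction requirements: Psi(0) = 0 and Phi(1) = 0 (no flux at distance 1).
# Consistency check: d/ds Psi(s^2) = 2 * Psi'(s^2) * s = 2 * Phi(s).
s, h = 0.3, 1e-6
numeric_derivative = (psi(s + h, n=2) - psi(s - h, n=2)) / (2 * h)
```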
Let us briefly discuss how the interval constraint for the v_i, i = 1, …, N, is enforced in (4) and (7). First, notice that v_{2N+1−i}, for i = 1, …, N, is the reflection of v_i at the right interval boundary 1.
The repulsion between v_i and the reflected position v_{2N+1−j} drives both v_i and v_{2N+1−j} away from the right interval boundary. The closer v_i and v_{2N+1−j} come to this boundary, the stronger is the repulsion. Considering all possible cases, it becomes clear that every v_i is either repelled from the reflection of v_j at the left or at the right interval boundary, but never from both at the same time. Under the gradient descent (7), the symmetry of v is preserved. Dropping the redundant entries v_{N+1}, …, v_{2N}, Equation (7) can be rewritten as (8) for i = 1, …, N, where the second sum emphasises the repulsions between original and reflected coordinates in a symmetric way, and the set J_i^2 is defined analogously to J_i^1. Thus, (8) denotes a formulation for pure repulsion amongst N positions v_i, with stabilisation being achieved through the consideration of their reflections at the domain boundary. It is worth mentioning that within (6) and (8) we only make use of the first N×N entries of W. In the following, we denote this submatrix by W̃ and refer to its elements as w̃_{i,j}. Given an initial vector f ∈ (0, 1)^N, initialising with it and applying (7) and (8) evolves v towards a minimiser of E.
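For concreteness, here is a minimal sketch of the right-hand side of this evolution for the flux Φ_{1,1}, evaluated over the extended position vector with unit weights (our own implementation; coinciding positions are skipped, as in the definition of J_i^1):

```python
import numpy as np

def phi(s, a=1.0):
    """Flux Phi_{a,1}(s) = a * (s - 1) on (0, 2), extended 2-periodically."""
    return a * (np.mod(s, 2.0) - 1.0)

def velocity(v):
    """Repulsive velocity of each v_i, i = 1..N: interactions with all other
    original positions and with all reflected coordinates 2 - v_j
    (unit weights assumed; coinciding positions are excluded)."""
    ext = np.concatenate([v, 2.0 - v[::-1]])   # extended vector in (0, 2)
    diff = ext[None, :] - v[:, None]           # v_j - v_i for the first N rows
    forces = np.where(np.abs(diff) > 1e-12, phi(diff), 0.0)
    return forces.sum(axis=1)

# The equidistant midpoint configuration (i - 1/2)/N is a steady state:
v_star = np.array([1.0 / 6.0, 0.5, 5.0 / 6.0])
```

All forces balance at v* = (1/6, 1/2, 5/6), in line with the convexity of the underlying energy.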

Theory
Below we provide a detailed analysis of the evolution and discuss its main properties. For this purpose we consider the Hessian of (6), whose entries for 1 ≤ i ≤ N can be computed explicitly.

General Results
In a first step, let us investigate the well-posedness of the underlying initial value problem in the sense of Hadamard [22].
Theorem 1 (Well-Posedness). Let Ψ = Ψ a,n as defined in Table 1. Then the initial value problem (8) is well-posed since (a) it has a solution, (b) the solution is unique, and (c) it depends continuously on the initial conditions.
Proof. The initial value problem (8) can be written as v̇(t) = f(v(t)) with v(t) ∈ R^{2N} and t ∈ R_0^+, where we make use of the fact that W is a constant weight matrix.
If f(v(t)) is continuously differentiable and Lipschitz continuous, all three conditions (a)-(c) hold. Existence and uniqueness directly follow from [39, Chapter 3.1, Theorem 3]. Continuous dependence on the initial conditions is guaranteed due to [39, Chapter 2.3, Theorem 1], which is based on Gronwall's Lemma [21]. Thus, let us now prove differentiability and Lipschitz continuity of f(v(t)).
Differentiability: Differentiability follows from the fact that all functions Φ_{a,n} are continuously differentiable. As a consequence, the partial derivatives of (8) w.r.t. v_i exist for i = 1, …, 2N.
Lipschitz Continuity: The Gershgorin circle theorem [18] allows us to estimate a valid Lipschitz constant L as an upper bound of the spectral radius of the Jacobian of (8). For 1 ≤ i ≤ N the entries of the Jacobian can be computed explicitly, and the radii of the corresponding Gershgorin discs can be bounded in terms of the Lipschitz constant L_Φ of the flux function Φ. Using the same reasoning, one obtains analogous bounds for the remaining rows. Consequently, one gets the upper bound L_max of (18) for the spectral radius, and thus for the Lipschitz constant L of the gradient of (8). For our specific class of flux functions Φ_{a,n}, a valid Lipschitz constant is given by L_Φ = a·n·(2n−1), the maximal slope of Φ_{a,n} on (0, 2). This concludes the proof.
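The Gershgorin estimate used here is easy to reproduce: every eigenvalue lies in a disc centred at a diagonal entry with radius equal to the corresponding off-diagonal absolute row sum, so the largest absolute row sum bounds the spectral radius. The snippet below (function names are ours) checks this on a small example.

```python
import numpy as np

def gershgorin_spectral_bound(J):
    """Upper bound for the spectral radius of J by Gershgorin's circle
    theorem: every eigenvalue lies in a disc centred at J_ii with radius
    sum_{j != i} |J_ij|, hence max_i sum_j |J_ij| bounds all eigenvalues."""
    return float(np.max(np.sum(np.abs(J), axis=1)))

J = np.array([[2.0, -0.5],
              [0.3, -1.0]])
bound = gershgorin_spectral_bound(J)
spectral_radius = float(np.max(np.abs(np.linalg.eigvals(J))))
```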
Next, let us show that no position v i can ever reach or cross the interval boundaries 0 and 1.
Theorem 2 (Avoidance of Range Interval Boundaries). For any weighting matrix W̃ ∈ (R_0^+)^{N×N}, all N positions v_i which evolve according to (8) from arbitrary initial values in (0, 1) do not reach the domain boundaries 0 and 1 for any time t ≥ 0.
Proof. Equation (8) can be rewritten in the form (21), where the latter follows from the periodicity of Φ. Consequently, any position v_i which gets arbitrarily close to one of the domain boundaries 0 or 1 experiences no impact from positions v_j with j ∈ J_i^2, and the first sum in (21) vanishes. The definition of Ψ(s^2) implies the sign conditions (26) and (27). Now remember that W̃ ∈ (R_0^+)^{N×N} and w̃_{i,i} > 0. In combination with (26) and (27) it follows that v_i is always driven away from the boundary, which concludes the proof.
Let us for a moment assume that the penaliser function is given by Ψ = Ψ_{a,n} from Table 1. Below, we prove that this implies convergence to the global minimum of the energy E(v, W̃).
Theorem 3 (Convergence for Ψ = Ψ_{a,n=1}). For t → ∞, given a penaliser Ψ_{a,1} with arbitrary a > 0, any initial configuration v ∈ (0, 1)^N converges to a unique steady state v* which is the global minimiser of the energy given in (6).
Proof. As a sum of convex functions, (6) is convex. Therefore, the energy decreases along trajectories of the gradient descent, cf. (29). According to Gershgorin's theorem [18], one can show that the Hessian matrix of (6) is positive definite for Ψ = Ψ_{a,1}, from which it follows that E(v, W̃) has a strict (global) minimum. This implies that the inequality in (29) becomes strict except in case of v = v*, and guarantees asymptotic Lyapunov stability [30] of v*. Thus, we have convergence to v* for t → ∞.
Remark 1. Theorem 3 can be extended to the case n = 2 and, in a weaker formulation, to arbitrary n ∈ N. The proofs for both cases are based on a straightforward application of the Gershgorin circle theorem. For details we refer to the supplementary material.
(a) Given that Ψ = Ψ_{a,n=2}, let us assume that one of the following two conditions is fulfilled for every i ∈ {1, …, N} and t ≥ 0. Then the Hessian matrix of (6) is positive definite, and convergence to the strict global minimum of E(v, W̃) follows. (b) For all penaliser functions Ψ = Ψ_{a,n}, one can show that the Hessian matrix of (6) is positive semi-definite. This means that our method converges to a global minimum of E(v, W̃). However, this minimum does not have to be unique.
In general, the steady-state solution of (8) depends on the definition of the penaliser function Ψ. Based on (21), and assuming that Ψ = Ψ_{a,n}, a minimiser of E(v, W̃) necessarily fulfils equation (30) for i = 1, …, N.

Global Model
If all positions v_i interact with each other during the evolution, i.e. w̃_{i,j} > 0 for 1 ≤ i, j ≤ N, we speak of our model as acting globally. Below, we prove the existence of weight matrices W̃ for which distinct positions v_i and v_j (with i ≠ j) can never become equal (assuming that the positions v_i, i = 1, …, N, are distinct at t = 0). This implies that the initial rank-order of the v_i is preserved throughout the evolution.
Theorem 4 (Distinctness of v_i and v_j). Among N initially distinct positions v_i ∈ (0, 1) evolving according to (8), no two ever become equal if w̃_{j,k} = w̃_{i,k} > 0 for 1 ≤ i, j, k ≤ N, i ≠ j.
Proof. Given N distinct positions v_i ∈ (0, 1), equation (8) can be rewritten accordingly for i = 1, …, N. We use this form to derive the difference (32) of the velocities, where 1 ≤ i, j ≤ N. Assume w.l.o.g. that v_j > v_i and consider (32) in the limit v_j − v_i → 0. Then we obtain (33) if w̃_{j,i} + w̃_{i,j} > 0, which every global model fulfils by the assumption that w̃_{i,j} > 0 for 1 ≤ i, j ≤ N. This follows from the fact that Φ(s) > 0 for s ∈ (−1, 0). Furthermore, we have (34). In conclusion, we can guarantee for global models with distinct particle positions that the limit relation (35) holds if w̃_{j,k} = w̃_{i,k} where 1 ≤ i, j, k ≤ N and i ≠ j. According to (35), v_j will always start moving away from v_i (and vice versa) when the difference between both gets sufficiently small. Since the initial positions are distinct, it follows that v_i ≠ v_j for i ≠ j for all times t.
A special case occurs if all entries of the weight matrix W̃ are set to 1, i.e. W̃ = 11^T with 1 := (1, …, 1)^T ∈ R^N. For this scenario, we obtain an analytic steady-state solution which is independent of the penaliser Ψ: Theorem 5 (Analytic Steady-State Solution for W̃ = 11^T). Under the assumption that (v_i) is in increasing order, W̃ = 11^T, and that Ψ(s^2) is twice continuously differentiable in (0, 2), the unique minimiser of (4) is given by v_i* = (i − 1/2)/N. Proof. Equation (4) can be rewritten without the redundant entries of v, from which one can verify by straightforward, albeit lengthy, calculations that ∇E(v*) = 0, and that the Hessian of E at v* is composed of sparse symmetric N×N matrices A_k, k = 1, …, N−1, built from the unit matrix I, the single-diagonal Toeplitz matrices T_k, and the single-antidiagonal Hankel matrices H_k. Here, δ_{i,j} denotes the Kronecker symbol: δ_{i,j} = 1 if i = j, and δ_{i,j} = 0 otherwise. All A_k are weakly diagonally dominant with positive diagonal, thus positive semidefinite by Gershgorin's Theorem. Moreover, the tridiagonal matrix A_1 is of full rank, thus even positive definite. By strict convexity of Ψ(s^2), all Φ'(k/N) are positive, thus D²E(v*) is positive definite. As a consequence, the steady state of the gradient descent (8) for any initial data f (with arbitrary rank-order) can, under the condition that W̃ = 11^T, be computed directly by sorting the f_i: let σ be the permutation of {1, …, N} for which (f_{σ^{-1}(i)})_{i=1,…,N} is increasing (this is what a sorting algorithm computes); then the steady state is given by v_i* = (σ(i) − 1/2)/N for i = 1, …, N (cf. Figure 3).
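The closing observation of the proof translates directly into code: for W̃ = 11^T the steady state is obtained by ranking the initial values (names below are ours; ties, which Theorem 4 excludes for the evolution, are broken arbitrarily here).

```python
import numpy as np

def steady_state(f):
    """Steady state for unit weights: the value of rank sigma(i) among the
    sorted f_i moves to (sigma(i) - 1/2)/N, i.e. v_i* = (sigma(i) - 1/2)/N."""
    ranks = np.argsort(np.argsort(f))   # sigma(i) - 1: zero-based rank of f_i
    return (ranks + 0.5) / len(f)

f = np.array([0.9, 0.1, 0.5, 0.2])
v_star = steady_state(f)
```

The result (0.875, 0.125, 0.625, 0.375) samples (0, 1) equidistantly in the rank-order of f, which is precisely why the global model later performs histogram equalisation.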
Additionally, we present an analytic expression for the steady state of the global model given a penaliser function Ψ = Ψ a,n (cf. Table 1) with n = 1.
Theorem 6 (Analytic Steady-State Solution for Ψ = Ψ_{a,n=1}). Given N distinct positions v_i in increasing order and a penaliser function Ψ = Ψ_{a,n=1}, the unique minimiser of (4) admits an explicit representation. Proof. The presented minimiser follows directly from (30). Figure 4 provides an illustration of the steady state.
Finally, in case all entries of the weight matrix W̃ are set to 1, we show that the global model converges, independently of Ψ, to a unique steady state: Theorem 7 (Convergence for W̃ = 11^T). Given that W̃ = 11^T, any initial configuration v ∈ (0, 1)^N with distinct entries converges to a unique steady state v* for t → ∞. This is the global minimiser of the energy given in (6).
Proof. Using the same reasoning as in the proof of Theorem 3, we know that inequality (29) holds. Due to the positive definiteness of (37), it follows that E(v, W̃) has a strict (global) minimum, which implies that the inequality in (29) becomes strict except in case of v = v*. This guarantees asymptotic Lyapunov stability of v* and thus convergence to v* for t → ∞.

Proof (of Theorem 8). We notice first that v_j − v_i and v_i + v_j for 1 ≤ i, j ≤ N are first-order approximations of (j − i) h u_x(x_i) and 2u(x_i), respectively.

Relation to Variational Signal and Image Filtering
Derivation of the Penaliser W. Assume first for simplicity that Ψ(s^2) = −κs with κ > 0 is linear in s on [0, 1] (thus not strictly convex). Then we obtain for the part of the inner sums of (4) corresponding to a fixed i the expression (45), where in the last step the sum over k = 1−i, …, N−i has been replaced with a sum over all k whose grid distance |k h| does not exceed the interaction radius, thus introducing a cutoff error for those locations x_i that lie within one interaction radius of the interval ends. Summation over i = 1, …, N approximates an integral of the form ∫_Ω W(u_x^2) dx. For a general penaliser, (45) is changed into a weighted sum of Ψ((k u_x(x_i))^2) for k = 1, …, N−1, which still amounts to a decreasing function W(u_x^2) that is convex in u_x. Qualitatively, W then behaves the same way as before.
Derivation of the Barrier Function B. Collecting the summands of (4) that were not used in (45), we obtain, again for a fixed i, a remaining sum, and thus, after summation over i analogous to the previous step, ∫_Ω B(u) dx with B(u) ≈ D Ψ(4u^2).
Similar derivations can be made for patches of 2D images. A point worth noticing is that the barrier function B is bounded. This differs from usual continuous models where such barrier functions tend to infinity at the interval boundaries. However, for each given sampling grid and patch size the barrier function is just strong enough to prevent W from pushing the values out of the interval.

Explicit Time Discretisation
Up to this point we have established a theory for the time-continuous evolution of particle positions. In order to employ our model in simulations and applications we need to discretise (8) in time. Subsequently, we provide a simple yet powerful discretisation which preserves all important properties of the time-continuous model. Approximating the time derivative in (8) by forward differences yields the explicit scheme (47), in which the new positions v_i^{k+1} are obtained from v_i^k by adding τ times the right-hand side of (8), for i = 1, …, N; here τ denotes the time step size and an upper index k refers to the time kτ. In the following, we derive conditions under which the explicit scheme preserves the position range (0, 1) and the position ordering. Furthermore, we show convergence of (47) in dependence of τ.
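A minimal sketch of this explicit scheme for Φ_{1,1} with unit weights (our own implementation; the step size below is a conservative choice in the spirit of the stability bound, with L_Φ = 1):

```python
import numpy as np

def phi(s):
    """Flux Phi_{1,1}(s) = s - 1 on (0, 2), extended 2-periodically."""
    return np.mod(s, 2.0) - 1.0

def explicit_step(v, tau):
    """v^{k+1} = v^k + tau * sum of repulsions over the extended vector
    (v, 2 - v reversed); unit weights, coinciding positions excluded."""
    ext = np.concatenate([v, 2.0 - v[::-1]])
    diff = ext[None, :] - v[:, None]
    forces = np.where(np.abs(diff) > 1e-12, phi(diff), 0.0)
    return v + tau * forces.sum(axis=1)

v = np.array([0.35, 0.40, 0.45])      # distinct initial positions in (0, 1)
tau = 1.0 / (4 * len(v))              # conservative step size for L_Phi = 1
for _ in range(2000):
    v = explicit_step(v, tau)
```

The iterates stay inside (0, 1), keep their rank-order, and converge to the equidistant steady state (i − 1/2)/N, as the theorems of this section predict.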

Theorem 9 (Avoidance of Range Interval Boundaries of the Explicit Scheme).
Let L_Φ be the Lipschitz constant of Φ restricted to the interval (0, 2). Moreover, let 0 < v_i^k < 1 for every 1 ≤ i ≤ N, and assume that the time step size τ of the explicit scheme (47) satisfies the bound (48). Then it follows that 0 < v_i^{k+1} < 1 for every 1 ≤ i ≤ N.
Proof. In accordance with (21), the explicit scheme (47) can be written in the form (49) for i = 1, …, N. Now assume that 0 < v_i^k, v_j^k < 1 and let us examine the contribution of the two summation terms in (49). Five cases need to be distinguished, depending on the ordering of v_i^k and v_j^k and on whether v_i^k lies below or above 1/2. If, for instance, 1/2 < v_i^k = v_j^k, then 2v_i^k ∈ (1, 2), and the required bound follows using Φ(1) = 0; if v_j^k < v_i^k and 1/2 < v_i^k, it follows using the periodicity of Φ. The remaining cases are handled analogously. Combining (47) with (48) and (50) concludes the proof.

Theorem 10 (Rank-Order Preservation of the Explicit Scheme). Let L_Φ be the Lipschitz constant of Φ restricted to the interval (0, 2). Furthermore, let v_i^0, for i = 1, …, N, denote the initially distinct positions in (0, 1) and, in accordance with Theorem 4, let the weight matrix W̃ have constant columns, i.e. w̃_{j,k} = w̃_{i,k} for 1 ≤ i, j, k ≤ N. Moreover, let 0 < v_i^k < v_j^k < 1 and assume that the time step size τ used in the explicit scheme (47) satisfies the bound (57). Then the rank-order is preserved: v_i^{k+1} < v_j^{k+1}.

Proof. For distinct positions, (47) can be rewritten accordingly for i = 1, …, N. Considering this explicit discretisation for ∂_t v_i and ∂_t v_j, we obtain their difference (59) for i, j ∈ {1, 2, …, N}. Now remember that v_i^k < v_j^k by assumption and that, as a consequence, the two terms T_1 and T_2 appearing in this difference are non-negative. Using the fact that w̃_{j,k} = w̃_{i,k} for 1 ≤ i, j, k ≤ N and that Φ is Lipschitz continuous on (0, 2), we also obtain the bound (60). Let the time step size τ fulfil (57). Then, in combination with T_1, T_2 ≥ 0, it follows that v_j^k − v_i^k − T_1 + T_2 > 0. Together with (59) and (60), this proves v_i^{k+1} < v_j^{k+1}.

Theorem 11 (Convergence of the Explicit Scheme). Let (6) be a twice continuously differentiable convex function. Then the explicit scheme (47) converges for time step sizes satisfying (65), where L_Φ denotes the Lipschitz constant of Φ restricted to the interval (0, 2) and L refers to the Lipschitz constant of the gradient of (6).
Proof. Convergence of the gradient method to the global minimum of E(v, W̃) is well-known for continuously differentiable convex functions with Lipschitz continuous gradient and time step sizes 0 < τ < 2/L (see e.g. [33, Theorem 2.1.14]). A valid Lipschitz constant is given by L_max as defined in (18). Consequently, the time step sizes τ need to fulfil (65) in order to ensure convergence of (47). The smaller-or-equal relation results from the strictness in (18).

Remark 3 (Optimal Time Step Size). The optimal time step size, i.e. the value of τ which leads to the most rapid descent, is given by τ = 1/L (see e.g. [33, 2.1.5]). Thus, we suggest using τ = 1/L_max.

Application to Image Enhancement
Now that we have presented a stable and convergent numerical scheme, we apply (47) to enhance the contrast of digital greyscale and colour images. Throughout all experiments we use Ψ = Ψ_{1,1} (cf. Table 1 and Figure 2).

Greyscale Images
The application of the proposed model to greyscale images follows the ideas presented in [5]. We define a digital greyscale image as a mapping f : {1, …, n_x} × {1, …, n_y} → [0, 1]. Note that, before processing, all grey values are mapped to the interval (0, 1) to ensure the validity of our model. The grid position of the i-th image pixel is given by the vector x_i, whereas v_i denotes the corresponding grey value. Subsequently, we will see that a well-considered choice of the weighting matrix W̃ allows us to enhance either the global or the local contrast of a given image.

Global Contrast Enhancement.
For global contrast enhancement we make use of the global model as discussed in Section 3.2. Only the N different occurring grey values v_i, and not their positions in the image, are considered. We let every entry w̃_{i,j} of the weighting matrix denote the frequency of grey value v_j in the image.
Assuming an 8-bit greyscale image, this leads to a weighting matrix of size 256 × 256 which is independent of the image size. As illustrated in Figure 5, global contrast enhancement can be achieved in two ways. As a first option, one can use the explicit scheme (47) to describe the evolution of all grey values v_i up to some time t (see column two of Figure 5). The amount of contrast enhancement grows with increasing values of t. In our experiments, an image size of 481 × 321 pixels and the application of the flux function Φ_{1,1} with L_Φ = 1 imply an upper bound of 1/(2 · 481 · 321) for τ. Thus, we can achieve the time t = 2 · 10^{-6} in Figure 5 in a single iteration. If one is only interested in an enhanced version of the original image with maximum global contrast, there is an alternative, namely the derived steady-state solution (43) for linear flux functions. The results are shown in the last column of Figure 5 (columns: original image, t = 2 · 10^{-6}, steady state (43)). This figure also confirms that the solution of the explicit scheme (47) converges to the steady-state solution (43) for t → ∞. From (43) it is clear that this steady state is equivalent to histogram equalisation. In summary, this means that the application of our global model to greyscale images offers an evolution towards histogram equalisation which allows us to control the amount of contrast enhancement in an intuitive way through the time parameter t.
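This equivalence can be made concrete in a few lines. The midpoint rule below (each grey value moves to the centre of its own mass in the cumulative histogram) is our reading of the steady state as the frequency-weighted analogue of v_i* = (σ(i) − 1/2)/N, not a formula quoted from the text.

```python
import numpy as np

def equalise(img):
    """Histogram-equalisation-like steady state for 8-bit grey values:
    grey value g is mapped to (cdf(g) - hist(g)/2) / #pixels, the midpoint
    of its own mass in the cumulative histogram; output lies in (0, 1)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf = np.cumsum(hist)
    lookup = (cdf - 0.5 * hist) / img.size
    return lookup[img]

img = np.array([[0, 0, 128],
                [128, 128, 255]], dtype=np.uint8)
out = equalise(img)
```

The mapping is increasing over the occurring grey values, so the rank-order is preserved, and no value ever reaches the range boundaries 0 or 1.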
Local Contrast Enhancement. In order to achieve local contrast enhancement we use our model to describe the evolution of grey values v_i at all n_x · n_y image grid positions. The change of every grey value v_i depends on all grey values within a disk-shaped neighbourhood of fixed radius around its grid position x_i. We assume that the entries of the weighting matrix follow (66), where we weight the spatial distance |x_j − x_i| by a function γ. The choice of γ is application dependent. However, it usually makes sense to define γ(x) as a non-increasing function of x. Two possible choices, γ_1 and γ_2, are sketched in Figure 6. When applying this local model to images, we make use of mirroring boundary conditions in order to avoid artefacts at the image boundaries. Figure 7 provides an example of local contrast enhancement of digital greyscale images. Again, we describe the grey value evolution with the explicit scheme (47). Furthermore, we use γ_1 to model the influence of neighbouring grey values. As is evident from Figure 7, increasing the value of t goes along with enhanced local contrast.
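A sketch of how such a weighting matrix could be assembled (the linearly decaying γ here is only an illustrative non-increasing choice, not the paper's γ_1 or γ_2, and all names are ours):

```python
import numpy as np

def local_weights(positions, radius, gamma=lambda d: np.maximum(0.0, 1.0 - d)):
    """Weight matrix with w_ij = gamma(|x_j - x_i| / radius) inside a
    disk-shaped neighbourhood of the given radius and w_ij = 0 outside.
    gamma(0) = 1 keeps the diagonal positive, as the model requires."""
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return np.where(dist <= radius, gamma(dist / radius), 0.0)

grid = np.array([[0.0, 0.0], [3.0, 4.0], [10.0, 0.0]])
W = local_weights(grid, radius=6.0)
```

The resulting matrix is symmetric with positive diagonal; pixels outside the disk get weight zero and hence do not interact.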

Colour Images
Based on the assumption that our input data is given in sRGB colour space [48] (in the following denoted by RGB), we represent a digital colour image by the mapping f : {1, …, n_x} × {1, …, n_y} → [0, 1]^3. Subsequently, our aim is the contrast enhancement of digital colour images without distorting the colour information. This means that we only want to adapt the luminance but not the chromaticity of a given image. For this purpose, we convert the given image data to YCbCr colour space [44, Section 3.5], since this representation provides a separate luminance channel. Next, we perform contrast enhancement on the luminance channel only. Just as for greyscale images, we map all Y-values to the interval (0, 1) to fulfil our model requirements. After enhancing the contrast, we transform the colour information of the image back to RGB colour space. At this point it is important to mention that the colour gamut of the RGB colour space is a subset of the YCbCr colour gamut, and during the conversion of colour coordinates from YCbCr to RGB the so-called colour gamut problem may occur: colours from the YCbCr colour gamut may lie outside the RGB colour gamut and thus cannot be represented in RGB colour coordinates. Naik and Murthy [32] state that a simple clipping of the values to the bounds creates an undesired shift of hue and may lead to colour artefacts. In order to avoid the colour gamut problem, we adapt the ideas presented by Nikolova and Steidl [34], which are based on the intensity representation of the HSI colour space [20, Section 6.2.3]. Using the original and enhanced intensities, they define an affine colour mapping and transform the original RGB values. This preserves the hue and results in an enhanced RGB image. It is straightforward to show that their algorithms are valid for any intensity f̂ of the type f̂ = c_r r + c_g g + c_b b with c_r + c_g + c_b = 1 and c_r, c_g, c_b ∈ [0, 1], where r, g, and b denote RGB colour coordinates.
Thus, they are applicable to the luminance representation of the YCbCr colour space, too, i.e. c_r = 0.299, c_g = 0.587, c_b = 0.114. Tian and Cohen make use of the same idea in [52]. As in [34], our result image is a convex combination of the outcomes of a multiplicative and an additive algorithm (see [34, Algorithm 4 and 5]) with coefficients λ and 1 − λ for λ ∈ [0, 1]. During our experiments we use a fixed value of λ = 0.5 (for details on how to choose λ we refer to [34]). An overview of our strategy for contrast enhancement of digital colour images is given in Figure 8.

Global Contrast Enhancement. Again, we apply the global model from Section 3.2 in order to achieve global contrast enhancement. As mentioned before, we consider the N different occurring Y-values of the YCbCr representation of the input image and denote them by v i (similar to Section 5.1 we neglect their positions in the image). Every entry w̃_{i,j} of the weighting matrix W̃ contains the number of occurrences of the value v j in the Y-channel of the image. It becomes clear that the application of our model - in this setting - basically comes down to histogram equalisation of the Y-channel. Figure 9 shows the resulting RGB images after global contrast enhancement. Similar to the greyscale scenario, we can either apply the explicit scheme (47) or - for Φ = Φ a,1 - estimate the steady state solution following (43). In the first case the amount of contrast enhancement grows with the positive time parameter t. The second column of Figure 9 shows the results for Φ = Φ 1,1 at a given time t. The corresponding steady state solutions are illustrated in the last column of Figure 9.
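Since the global model in this setting essentially performs histogram equalisation of the Y-channel, the resulting mapping can be sketched as a CDF lookup. This is one common variant of histogram equalisation with assumed array names, not the explicit scheme (47) itself.

```python
import numpy as np

# Hypothetical Y-channel of a tiny image (values in (0, 1)).
y = np.array([[0.2, 0.2, 0.5],
              [0.5, 0.5, 0.9]])

values, counts = np.unique(y, return_counts=True)   # distinct Y-values and their multiplicities
cdf = np.cumsum(counts) / y.size                    # empirical CDF of the Y-values
equalised = cdf[np.searchsorted(values, y)]         # map each pixel through the CDF
print(equalised)
```

Mapping every grey value through the empirical CDF spreads the occurring values evenly over the co-domain, which is exactly the flattening effect of histogram equalisation.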

Fig. 9. Global contrast enhancement using Φ = Φ 1,1, λ = 0.5, and images from [1] (columns: original image, t = 5 · 10^−7, steady state (43)).

Local Contrast Enhancement. In a similar manner - and adapting the ideas from Subsection 5.1 - we achieve local contrast enhancement in colour images. For this purpose we describe the evolution of the Y-values v i at all n x · n y image grid positions using a disk-shaped neighbourhood of radius ρ around the corresponding grid positions x i. The entries of the weighting matrix W̃ follow (66). In combination with mirrored boundary conditions, the explicit scheme (47) allows us to increase the local contrast of an image with growing t. Figure 10 shows exemplary results for Φ = Φ 1,1 and γ = γ 1 (cf. (68)). Note how well - in comparison to the global set-up in Figure 9 - the structure of the door gets enhanced while the details of the door knob are preserved. The differences are even larger in the second image: For both the couple in the foreground and the background scenery, the contrast increases, which ensures visibility also for larger times t.
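The disk-shaped neighbourhood of radius ρ can be pictured as a boolean mask on the pixel grid. The following is an illustrative construction (the radius value and variable names are assumptions), not the exact weighting (66).

```python
import numpy as np

rho = 2  # hypothetical neighbourhood radius in pixels

# Offsets of all grid positions in a (2*rho+1) x (2*rho+1) window
# around the centre pixel.
dy, dx = np.mgrid[-rho:rho + 1, -rho:rho + 1]

# A grid position belongs to the neighbourhood if it lies inside
# the disk of radius rho around the centre.
disk = (dx**2 + dy**2) <= rho**2
print(disk.astype(int))
print(disk.sum())  # number of grid positions inside the disk
```

Only the pixels flagged by this mask would contribute to the weighting of the centre position; the corners of the square window, which lie outside the disk, are excluded.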

Parameters
In total, our model has up to six parameters: Φ, τ, t, λ, ρ, and γ. During our experiments we have fixed Φ(s) to the linear flux function Φ 1,1(s) and λ to 0.5. Valid bounds for the time step size τ are given in Theorems 9-11. From the theory in Section 3 and the subsequent experiments on greyscale and colour images it becomes clear that the amount of contrast enhancement grows with the diffusion time t. Thus, it remains to discuss the influence of the parameters ρ and γ. We found that the neighbourhood radius ρ affects the diffusion time and controls the amount of perceived local contrast enhancement, i.e. it steers the localisation of the contrast enhancement process. Whereas small radii lead to high contrast already in small image areas, the size of image sections with high contrast increases with ρ. For sufficiently large values of ρ, global histogram equalisation is approximated. Another interesting point is the choice of the weighting function γ. Overall, choosing γ = γ 1 leads to more homogeneous contrast enhancement, resulting in a smoother perception. For γ = γ 2 the focus always lies on the neighbourhood centre, which implies an even stronger enhancement of local structures than in the preceding case. In summary, γ 2 leads to more enhancement which, however, also creates undesired effects in smooth or noisy regions. Thus, we prefer γ 1 over γ 2. Further experiments which visualise the effect of the parameters can be found in the supplementary material.

Related Work from an Application Perspective
Now that we have demonstrated the applicability of our model to digital images, we briefly discuss its relation to other existing theories in the context of image processing. As mentioned in Section 5.1, applying the global model - with W̃ = 1 1^⊤, the all-ones matrix - is identical to histogram equalisation (a common formulation can e.g. be found in [20]). Furthermore, there exist other closely related histogram specification techniques - such as [45,35,36] - which can have the same steady state. If we compare our evolution with the histogram modification flow introduced by Sapiro and Caselles [45], we see that their flow can also be translated into a combination of repulsion among grey values and a barrier function. However, in [45] the repulsive force is constant and the barrier function quadratic. Thus, they cannot be derived from the same kind of interaction between the v i and their reflected counterparts as in our paper.
Referring to Section 5.1, there also exist well-known approaches which aim to enhance the local image contrast such as adaptive histogram equalisation -see [42] and the references therein -or contrast limited adaptive histogram equalisation [55]. The latter technique tries to overcome the over-amplification of noise in mostly homogeneous image regions when using adaptive histogram equalisation. Both approaches share the basic idea with our approach in Section 5.1 and perform histogram equalisation for each pixel, i.e. the mapping function for every pixel is determined using a neighbourhood of predefined size and its corresponding histogram.
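The per-pixel idea behind adaptive histogram equalisation can be illustrated with a toy sketch: each pixel is mapped through the empirical CDF of its own neighbourhood. This is a naive illustration with assumed window size and names, not the algorithms of [42] or [55].

```python
import numpy as np

def adaptive_equalise(img, radius=1):
    """Map every pixel through the empirical CDF of its own
    (2*radius+1) x (2*radius+1) neighbourhood (mirrored boundaries)."""
    padded = np.pad(img, radius, mode='symmetric')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Fraction of neighbours not exceeding the centre value,
            # i.e. the local empirical CDF evaluated at this pixel.
            out[i, j] = np.mean(win <= img[i, j])
    return out

img = np.array([[0.1, 0.2],
                [0.3, 0.4]])  # hypothetical toy image
result = adaptive_equalise(img)
print(result)
```

Because each pixel gets its own mapping function, locally dominant grey values are spread apart even when the global histogram is already flat; contrast-limited variants such as [55] additionally clip the local histogram to curb noise amplification in homogeneous regions.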
Another related research topic is the rich field of colour image enhancement which we broach in Section 5.2. A short review of existing methods - as well as two new ideas - is presented in [3]. Therein, Bassiou and Kotropoulos also mention the colour gamut problem for methods which perform contrast enhancement in a different colour space and transform the colour coordinates to RGB afterwards. Of particular interest are the publications by Naik and Murthy [32] and Nikolova and Steidl [34], whose ideas are used in Section 5.2. Both of them suggest - based on an affine colour transform - strategies to overcome the colour gamut problem while avoiding colour artefacts in the resulting image. A recent approach which also makes use of these ideas is presented by Tian and Cohen [52]. Ojo et al. [37] make use of the HSV colour space to avoid the colour gamut problem when enhancing the contrast of colour images. A variational approach for contrast enhancement which tries to approximate the hue of the input image was recently published by Pierre et al. [41].

Conclusions and Outlook
In our paper we have presented a mathematical model which describes pure backward diffusion as gradient descent of strictly convex energies. The underlying evolution makes use of ideas from the area of collective behaviour and - in terms of the latter - our model can be understood as a fully repulsive discrete first order swarm model. Not only is it surprising that our model allows backward diffusion to be formulated as a convex optimisation problem, but also that it is sufficient to impose reflecting boundary conditions in the diffusion co-domain in order to guarantee stability. This strategy is contrary to existing approaches which either assume forward or zero diffusion at extrema or add classical fidelity terms to avoid instabilities. Furthermore, the discretisation of our model does not require sophisticated numerics: We have proven that a straightforward explicit scheme is sufficient to preserve the stability of the time-continuous evolution. In our experiments, we have shown that our model can directly be applied to contrast enhancement of digital greyscale and colour images.
We see our contribution mainly as an example of stable modelling of backward parabolic evolutions that create neither theoretical nor numerical problems. We are convinced that this concept has far more widespread applications in inverse problems, image processing, and computer vision. Exploring them will be part of our future research.

Acknowledgements. Leif Bergerhoff wishes to thank Antoine Gautier and Peter Ochs for fruitful discussions around the topic of the paper. This project has received funding from the DFG Cluster of Excellence Multimodal Computing and Interaction and from the European Union's Horizon 2020 research and innovation programme (grant agreement No 741215, ERC Advanced Grant INCOVID). This is gratefully acknowledged.