Bilevel Parameter Learning for Higher-Order Total Variation Regularisation Models

We consider a bilevel optimisation approach for parameter learning in higher-order total variation image reconstruction models. Apart from the least squares cost functional, naturally used in bilevel learning, we propose and analyse an alternative cost based on a Huber-regularised TV seminorm. Differentiability properties of the solution operator are verified and a first-order optimality system is derived. Based on the adjoint information, a combined quasi-Newton/semismooth Newton algorithm is proposed for the numerical solution of the bilevel problems. Numerical experiments are carried out to show the suitability of our approach and the improved performance of the new cost functional. Thanks to the bilevel optimisation framework, a detailed comparison between TGV² and ICTV is also carried out, showing the advantages and shortcomings of both regularisers, depending on the structure of the processed images and their noise level.


Introduction
In this paper we propose a bilevel optimisation approach for parameter learning in higher-order total variation regularisation models for image restoration. The reconstruction of an image from imperfect measurements is essential for all research which relies on the analysis and interpretation of image content. Mathematical image reconstruction approaches aim to maximise the information gain from acquired image data by intelligent modelling and mathematical analysis.
A variational image reconstruction model can be formalised as follows. Given data f which is related to an image (or to certain image information, e.g. a segmented or edge-detected image) u through a generic forward operator (or function) K, the task is to retrieve u from f. In most realistic situations this retrieval is complicated by the ill-posedness of K as well as random noise in f. A widely accepted method that approximates this ill-posed problem by a well-posed one and counteracts the noise is the method of Tikhonov regularisation. That is, an approximation to the true image is computed as a minimiser of

(1.1)    min_u  d(Ku, f) + α R(u),

where R is a regularising energy that models a-priori knowledge about the image u, d(·,·) is a suitable distance function that models the relation of the data f to the unknown u, and α > 0 is a parameter that balances our trust in the forward model against the need of regularisation. The parameter α, in particular, depends on the amount of ill-posedness in the operator K and the amount (amplitude) of the noise present in f. A key issue in imaging inverse problems is the correct choice of α, image priors (regularisation functionals R), fidelity terms d and (if applicable) the choice of what to measure (the linear or nonlinear operator K). Depending on this choice, different reconstruction results are obtained. While functional modelling (1.1) constitutes a mathematically rigorous and physically motivated way of setting up the reconstruction of an image (providing reconstruction guarantees in terms of error and stability estimates), it is limited with respect to its adaptivity for real data. On the other hand, data-based modelling of reconstruction approaches is set up to produce results which are optimal with respect to the given data. However, in general it neither offers insights into the structural properties of the model nor provides comprehensible reconstruction guarantees. Indeed, we believe that for the development of reliable, comprehensible and at the same time effective models (1.1), it is essential to aim for a unified approach that seeks tailor-made regularisation and data models by combining model- and data-based approaches.
To do so we focus on a bilevel optimisation strategy for finding an optimal setup of variational regularisation models (1.1). That is, for a given training pair of noisy and original clean images (f, f₀), respectively, we consider a learning problem of the form

(1.2)    min F(u*)    subject to    u* ∈ argmin_u { d(Ku, f) + α R(u) },

where F is a generic cost functional that measures the fitness of u* to the original image f₀. The argument of the minimisation problem will depend on the specific setup (i.e. the degrees of freedom) in the constraint problem (1.1). In particular, we propose a bilevel optimisation approach for learning optimal parameters in higher-order total variation regularisation models for image reconstruction, in which the arguments of the optimisation constitute parameters in front of the first- and higher-order regularisation terms. Rather than working on the discrete problem, as is done in standard parameter learning and model optimisation methods, we optimise the regularisation models in infinite-dimensional function space. We will explain this approach in more detail in the next section. Before doing so, let us give an account of the state of the art of bilevel optimisation for model learning. In machine learning, bilevel optimisation is well established. It is a semi-supervised learning method that optimally adapts itself to a given dataset of measurements and desirable solutions. In [34,18,14], for instance, the authors consider bilevel optimisation for finite-dimensional Markov random field models. In inverse problems, the optimal inversion and experimental acquisition setup is discussed in the context of optimal model design in works by Haber, Horesh and Tenorio [20,21], as well as Ghattas et al. [8,3]. Recently, parameter learning in the context of functional variational regularisation models (1.1) has also entered the image processing community with works by the authors [16,9], Kunisch, Pock and co-workers [26,13], Chung et al. [15] and Hintermüller et al. [24].
Apart from the work of the authors [16,9], all approaches so far are formulated and optimised in the discrete setting. Our subsequent modelling, analysis and optimisation will be carried out in function space rather than on a discretisation of (1.1). While digitally acquired image data is of course discrete, the aim of high-resolution image reconstruction and processing is always to compute an image that is close to the real (analogue, infinite-dimensional) world. Hence, it makes sense to seek images which have certain properties in an infinite-dimensional function space. That is, we aim for a processing method that accentuates and preserves qualitative properties in images independent of the resolution of the image itself [36]. Moreover, optimisation methods conceived in function space potentially result in numerical iterative schemes which are resolution- and mesh-independent upon discretisation [23].
Higher-order total variation regularisation has been introduced as an extension of the standard total variation regulariser in image processing. As Total Variation (TV) regularisation [32] and many more contributions in the image processing community have shown, a non-smooth first-order regularisation procedure results in a nonlinear smoothing of the image, smoothing more in homogeneous areas of the image domain and preserving characteristic structures such as edges. In particular, the TV regulariser is tuned towards the preservation of edges and performs very well if the reconstructed image is piecewise constant. The drawback of such a regularisation procedure becomes apparent as soon as images or signals (in 1D) are considered which do not only consist of constant regions and jumps, but also possess more complicated, higher-order structures, e.g. piecewise linear parts. The artefact introduced by TV regularisation in this case is called staircasing [31]. One possibility to counteract such artefacts is the introduction of higher-order derivatives in the image regularisation. Chambolle and Lions [10], for instance, propose a higher-order method by means of an infimal convolution of the TV of the image and the TV of the image gradient, called the Infimal-Convolution Total Variation (ICTV) model. Other approaches to combine first- and second-order regularisation originate, for instance, from Chan, Marquina, and Mulet [11], who consider total variation minimisation together with weighted versions of the Laplacian, and from the Euler-elastica functional [29,12], which combines total variation regularisation with curvature penalisation, among many others [27,30]. Recently, Bredies et al. have proposed Total Generalized Variation (TGV) [4] as a higher-order variant of TV regularisation.
In this work we mainly concentrate on two second-order total variation models: the recently proposed TGV [4] and the ICTV model of Chambolle and Lions [10]. We focus on second-order TV regularisation only, since this is the one which seems to be most relevant in imaging applications [25,5]. For Ω ⊂ R² open and bounded and u ∈ BV(Ω), the ICTV regulariser reads

(1.3)    ICTV_{α,β}(u) := min_{v ∈ W^{1,1}(Ω), ∇v ∈ BV(Ω;R²)}  α ‖Du − ∇v‖_{M(Ω;R²)} + β ‖D∇v‖_{M(Ω;R^{2×2})}.

On the other hand, second-order TGV [7,6] for u ∈ BV(Ω) reads

(1.4)    TGV²_{α,β}(u) := min_{w ∈ BD(Ω)}  α ‖Du − w‖_{M(Ω;R²)} + β ‖Ew‖_{M(Ω;Sym²(R²))}.

Here BD(Ω) := {w ∈ L¹(Ω;R²) : ‖Ew‖_{M(Ω;Sym²(R²))} < ∞} is the space of vector fields of bounded deformation on Ω, E denotes the symmetrised gradient and Sym²(R²) the space of symmetric tensors of order 2 with arguments in R². The parameters α, β are fixed positive parameters and will constitute the arguments in the special learning problem à la (1.2) we consider in this paper. The main difference between (1.3) and (1.4) is that we do not generally have w = ∇v for any function v. This results in some qualitative differences between ICTV and TGV regularisation, compare for instance [1]. Substituting αR(u) in (1.1) by TGV²_{α,β}(u) or ICTV_{α,β}(u) gives the TGV image reconstruction model and the ICTV image reconstruction model, respectively. In this paper we only consider the case K = Id, the identity, and d(u, f) = ‖u − f‖²_{L²(Ω)} in (1.1), which corresponds to an image denoising model for removing Gaussian noise. With our choice of regulariser, the former scalar α in (1.1) has been replaced by a vector (α, β) of two parameters in (1.4) and (1.3). The choice of the entries in this vector does not only determine the overall strength of the regularisation (depending on the properties of K and the noise level); those parameters also balance between the different orders of regularity of the function u, and their choice is indeed crucial for the image reconstruction result. Large β will give regularised solutions that are close to TV-regularised reconstructions, compare Figure 1. Large α will result in TV²-type solutions, that is, solutions that are regularised with TV of the gradient [22,30],
compare Figure 2. With our approach described in the next section, we propose a learning approach for choosing those parameters optimally, in particular optimally for particular types of images. For the existence analysis of an optimal solution, as well as for the derivation of an optimality system for the corresponding learning problem (1.2), we will consider a smoothed version of the constraint problem (1.1), which is the one in fact used in the numerics. That is, we replace R(u) (being TV, TGV or ICTV in this paper) by a Huber-regularised version and add an H¹ regularisation with a small weight to (1.1). In this setting, and under the special assumption of box constraints on α and β, we provide a simple existence proof for an optimal solution. A more general existence result that holds also for the original non-smooth problem and does not require box constraints is derived in [17], and we refer the reader to this paper for a more sophisticated analysis of the structure of solutions.
A main challenge in the setup of such a learning approach is to decide what is the best way to measure fitness (optimality) of the model. In our setting this amounts to choosing an appropriate distance F in (1.2) that measures the fitness of reconstructed images to the 'perfect', noise-free images in an appropriate training set. We have to formalise what we mean by an optimal reconstruction model. Classically, the difference between the original, noise-free image f₀ and its regularised version u_{α,β} is computed with an L²₂ cost functional, which is closely related to the PSNR quality measure. Apart from this, we propose in this paper an alternative cost functional based on a Huberised total variation cost, where the Huber regularisation |·|_γ will be defined later on in Definition 2.1. We will see that the choice of this cost functional is indeed crucial for the qualitative properties of the reconstructed image. The proposed bilevel approach has an important indirect consequence: it establishes a basis for the comparison of the different total variation regularisers employed in image denoising tasks. In the last part of the paper we exhaustively compare the performance of TV, TGV² and ICTV for various image datasets. The parameters are chosen optimally, according to the proposed bilevel approach, and different quality measures (like PSNR and SSIM) are considered for the comparison. The obtained results are enlightening about when to use each one of the considered regularisers. In particular, ICTV appears to behave better for images with arbitrary structure and moderate noise levels, whereas TGV² behaves better for images with large smooth areas.
Outline of the paper. In Section 2 we state the bilevel learning problem for the two higher-order total variation regularisation models, TGV and ICTV, and prove existence of an optimal parameter pair (α, β). The bilevel optimisation problem is analysed in Section 3, where existence of Lagrange multipliers is proved and an optimality system, as well as a gradient formula, are derived. Based on the optimality condition, a BFGS algorithm for the bilevel learning problem is devised in Section 4. For the numerical solution of each denoising problem, an infeasible semismooth Newton method is considered. Finally, we discuss the performance of the parameter learning method by means of several examples for the denoising of natural photographs in Section 5. Therein, we also present a statistical analysis of how TV, ICTV and TGV regularisation compare in terms of returned image quality, carried out on 200 images from the Berkeley segmentation dataset BSDS300.

Problem statement and existence analysis
We strive to develop a parameter learning method for higher-order total variation regularisation models that maximises the fit of the reconstructed images to training images simulated for an application at hand. For a given noisy image f ∈ L²(Ω), Ω ⊂ R² open and bounded, we consider

(2.1)    u_{α,β} ∈ argmin_u { ‖u − f‖²_{L²(Ω)} + R_{α,β}(u) },

where α, β ∈ R. We focus on TGV² and ICTV image denoising, that is, R_{α,β} equals TGV²_{α,β} from (1.4) or ICTV_{α,β} from (1.3), with spatial dependence, for u ∈ BV(Ω). For this model, we want to determine the optimal choice of α, β, given a particular type of images and a fixed noise level. More precisely, we consider a training pair (f, f₀), where f is a noisy image corrupted by normally distributed noise with a fixed variance, and the image f₀ represents the ground truth or an image that approximates the ground truth within a desirable tolerance. Then, we determine the optimal choice of α, β by solving the problem

min_{(α,β)} F(u_{α,β}),

where F equals the L²₂ cost (1.5) or the Huberised TV cost (1.6), and u_{α,β} for a given f solves a regularised version of the minimisation problem (2.1) that will be specified in the next section, compare problem (2.3b). This regularisation of the problem is a technical requirement for solving the bilevel problem that will be discussed in the sequel. In contrast to learning α, β in (2.1) in finite-dimensional parameter spaces (as is the case in machine learning), we aim for novel optimisation techniques in infinite-dimensional function spaces.
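The nesting of the two problems can be made concrete in a toy setting. The sketch below is our own illustration, not the paper's model: it replaces the nonsmooth TGV²/ICTV regulariser by a quadratic H¹-type smoother, so that the lower-level denoiser reduces to a linear solve, and scans a single scalar parameter α against the ground truth f₀.

```python
import numpy as np

# 1-D toy signals: ground truth f0 and its noisy version f
rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)
f0 = np.sin(2.0 * np.pi * x)
f = f0 + 0.1 * rng.standard_normal(n)

# Forward-difference matrix D and discrete Laplacian L = D^T D
D = np.diff(np.eye(n), axis=0)          # shape (n-1, n)
L = D.T @ D

def denoise(alpha):
    """Lower level: u_alpha = argmin_u 1/2 ||u - f||^2 + alpha/2 ||Du||^2,
    i.e. the solution of the linear system (I + alpha L) u = f."""
    return np.linalg.solve(np.eye(n) + alpha * L, f)

def upper_cost(alpha):
    """Upper level: squared distance of the reconstruction to the ground truth."""
    u = denoise(alpha)
    return np.sum((u - f0) ** 2)

# Brute-force scan of the scalar parameter; the paper instead uses
# adjoint-based gradients and BFGS for the parameter vector (alpha, beta).
alphas = np.geomspace(1e-4, 1e2, 200)
best = alphas[np.argmin([upper_cost(a) for a in alphas])]
print(best)
```

The interesting point is that the optimal α is strictly positive and interior: too little smoothing keeps the noise, too much destroys the signal, exactly the trade-off the bilevel problem resolves automatically.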

Formal statement.
Let Ω ⊂ Rⁿ be an open bounded domain with Lipschitz boundary. This will be our image domain. Usually Ω = (0, w) × (0, h), for w and h the width and height of a two-dimensional image, although no such assumptions are made in this work. Our data f and f₀ are assumed to lie in L²(Ω).
In our learning problem, we look for parameters (α, β) that solve, for some cost functional F,

(2.3a)    min_{(α,β)} F(u_{α,β})
(2.3b)    subject to  u_{α,β} ∈ argmin_{u ∈ H¹(Ω)} J_{γ,µ}(u; α, β).

Here J_{γ,µ}(·; α, β) is the regularised denoising functional that amends the regularisation term in (2.1) by a Huber-regularised version of it with parameter γ > 0, and an elliptic regularisation term with parameter µ > 0. In the case of TGV², the modified regularisation term R^{γ,µ}_{α,β}(u) then reads, for u ∈ H¹(Ω),

TGV^{2,γ,µ}_{α,β}(u) := min_{w ∈ H¹(Ω)}  µ/2 (‖u‖²_{H¹(Ω)} + ‖w‖²_{H¹(Ω)}) + α ∫_Ω |Du − w|_γ dx + β ∫_Ω |Ew|_γ dx,

and in the case of ICTV we have

ICTV^{γ,µ}_{α,β}(u) := min_{v: ∇v ∈ H¹(Ω)}  µ/2 (‖u‖²_{H¹(Ω)} + ‖∇v‖²_{H¹(Ω)}) + α ∫_Ω |Du − ∇v|_γ dx + β ∫_Ω |D∇v|_γ dx.

Here, H¹(Ω) abbreviates H¹(Ω; Rⁿ) where vector fields are concerned, and the Huber regularisation |·|_γ is defined as follows.
Definition 2.1. Given γ ∈ (0, ∞], we define, for the norm ‖·‖₂ on R^m, the Huber regularisation

|g|_γ = ‖g‖₂ − 1/(2γ)    if ‖g‖₂ ≥ 1/γ,
|g|_γ = (γ/2) ‖g‖₂²      if ‖g‖₂ < 1/γ.

For the cost functional F, given noise-free data f₀ ∈ L²(Ω) and a regularised solution u ∈ H¹(Ω), we consider in particular the L²₂ cost

F_{L²₂}(u) = ½ ‖u − f₀‖²_{L²(Ω)},

as well as the Huberised total variation cost

F_{L¹η∇}(u) = ∫_Ω |∇(u − f₀)|_γ dx,

with noise-free data f₀ ∈ BV(Ω).
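The two branches of the Huber regularisation meet continuously at the threshold ‖g‖₂ = 1/γ, which is easy to check numerically. The following is our own small sketch of Definition 2.1 for the Euclidean norm, not code from the paper:

```python
import numpy as np

def huber_norm(g, gamma):
    """Huber regularisation |g|_gamma of the Euclidean norm of g.

    Quadratic near the origin (|g| < 1/gamma), affine with slope 1
    beyond the threshold; both branches take the value 1/(2*gamma)
    at |g| = 1/gamma, so the function is C^1.
    """
    n = np.linalg.norm(g)
    if n >= 1.0 / gamma:
        return n - 1.0 / (2.0 * gamma)
    return 0.5 * gamma * n ** 2

# At the threshold |g| = 1/gamma = 0.5 both branches give 1/(2*gamma):
gamma = 2.0
print(huber_norm(np.array([0.5, 0.0]), gamma))  # 0.25
```

For large γ the regularisation approaches the norm itself, which is why γ = ∞ is admitted in the definition.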

2.2.
Existence of an optimal solution. The existence of an optimal solution for the learning problem (2.3) is a special case of the class of bilevel problems considered in [17], where existence of optimal parameters in (0, +∞]^{2N} is proven. For convenience, we provide a simplified proof for the case where box constraints on the parameters are imposed. We start with an auxiliary lower semicontinuity result for the Huber-regularised functionals.
Recall that for g ∈ R^m, the Huber-regularised norm may be written in dual form as

|g|_γ = sup { ⟨g, ϕ⟩ − (1/(2γ)) ‖ϕ‖₂² : ϕ ∈ R^m, ‖ϕ‖₂ ≤ 1 }.

Therefore, we find that the Huber-regularised functionals may be written as a pointwise supremum of affine functionals ⟨·, ϕ⟩ − G*(ϕ). Taking a supremising sequence {ϕ_j}_{j=1}^∞ for this functional at any point u, we easily see lower semicontinuity by considering the sequences {⟨u_i, ϕ_j⟩ − G*(ϕ_j)}_{i=1}^∞ for each j.
Our main existence result is the following.
Theorem 2.1. We consider the learning problem (2.3) for TGV² and ICTV regularisation, optimising over parameters (α, β) such that 0 ≤ α ≤ ᾱ, 0 ≤ β ≤ β̄. Here (ᾱ, β̄) < ∞ is an arbitrary but fixed vector in R² that defines a box constraint on the parameter space. Then, there exists an optimal solution (α̂, β̂) ∈ R² for this problem, for both choices of cost functionals.

Proof. Let (α_n, β_n) be a minimising sequence. Due to the box constraints, the sequence (α_n, β_n) is bounded in R². Moreover, for the corresponding sequence of states u_n := u_{(α_n,β_n)} we get, by optimality,

J_{γ,µ}(u_n; α_n, β_n) ≤ J_{γ,µ}(u; α_n, β_n)    for all u ∈ H¹(Ω);

in particular this holds for u = 0. Hence,

(2.4)    ½ ‖u_n − f‖²_{L²(Ω)} + R^{γ,µ}_{α_n,β_n}(u_n) ≤ ½ ‖f‖²_{L²(Ω)}.

Exemplarily, we consider here the case of the TGV regulariser, that is, R^{γ,µ}_{α_n,β_n} = TGV^{2,γ,µ}_{α_n,β_n}. The proof for the ICTV regulariser can be done in a similar fashion. Inequality (2.4) in particular gives

µ/2 (‖u_n‖²_{H¹(Ω)} + ‖w_n‖²_{H¹(Ω)}) ≤ ½ ‖f‖²_{L²(Ω)},

where w_n is the optimal w for u_n. This gives that (u_n, w_n) is uniformly bounded in H¹(Ω) × H¹(Ω) and that there exists a subsequence, again denoted by (u_n, w_n), converging weakly in H¹(Ω) × H¹(Ω) to a limit point (û, ŵ). Using the continuity of the L² fidelity term with respect to strong convergence in L², and the weak lower semicontinuity of the H¹ term with respect to weak convergence in H¹ and of the Huber-regularised functional even with respect to weak* convergence in M (cf. Lemma 2.1), we get

J_{γ,µ}(û; α̂, β̂) ≤ liminf_{n→∞} J_{γ,µ}(u_n; α_n, β_n) ≤ liminf_{n→∞} J_{γ,µ}(u; α_n, β_n) = J_{γ,µ}(u; α̂, β̂)    for all u ∈ H¹(Ω),

where in the last step we have used the boundedness of the sequence R^{γ,µ}_{α_n,β_n}(u_n) from (2.4) and the convergence of (α_n, β_n) to (α̂, β̂) in R². This shows that the limit point û is an optimal solution for (α̂, β̂). Moreover, due to the weak lower semicontinuity of the cost functional F and the fact that the set {(α, β) : 0 ≤ α ≤ ᾱ, 0 ≤ β ≤ β̄} is closed, we have that (α̂, β̂, û) is optimal for (2.3).
Remark 2.1. • Using the existence result in [17], in principle we could allow infinite values for α and β. This would include both TV² and TV as possible optimal regularisers in our learning problem.
• In [17], in the case of the L²₂ cost, and assuming that R^γ_{α,β}(f) > R^γ_{α,β}(f₀), we moreover show that the parameters (α, β) are strictly larger than 0. In the case of the Huberised TV cost this can only be proven in a discretised setting. Please see [17] for details.
• The existence of solutions with µ = 0, that is, without elliptic regularisation, is also proven in [17]. Note that here we focus on the case µ > 0, since the elliptic regularity is required for proving the existence of Lagrange multipliers in the next section.

Lagrange multipliers
In this section we prove the existence of Lagrange multipliers for the learning problem (2.3) and derive an optimality system that characterises its solution. Moreover, a gradient formula for the reduced cost functional is obtained, which plays an important role in the development of fast solution algorithms for the learning problems (see Section 4.1).
In what follows, all proofs are presented for the TGV² regularisation case, that is, R^γ_{α,β} = TGV^{2,γ}_{α,β}. However, possible modifications to cope with the ICTV model will also be commented on.
We start by investigating the differentiability of the solution operator.
3.1. Differentiability of the solution operator. We recall that the TGV² denoising problem is given by (3.1). Using an elliptic regularisation, u is then obtained as the minimiser of the correspondingly regularised functional. A necessary and sufficient optimality condition for the latter is given by a variational equation of the form

a(u, Ψ) + α (h_γ(Dv − w), DΨ₁ − Ψ₂) + β (h_γ(Ew), EΨ₂) = (f, Ψ₁)_{L²(Ω)}    for all Ψ = (Ψ₁, Ψ₂) ∈ U,

with u = (v, w), where a(·,·) collects the elliptic (µ-weighted) and fidelity terms and h_γ denotes the derivative of the Huber function.

Theorem 3.1. The solution operator S : R² → U, which assigns to each pair (α, β) ∈ R² the corresponding solution to the denoising problem (3.1), is Fréchet differentiable, and its derivative is characterised by the unique solution z = S′(α, β)[θ₁, θ₂] ∈ U of the linearized equation (3.2).

Proof. Thanks to the ellipticity of a(·,·) and the monotonicity of h_γ, existence of a unique solution to the linearized equation follows from the Lax-Milgram theorem.
Let ξ := u⁺ − u − z, where u = S(α, β) and u⁺ = S(α + θ₁, β + θ₂). Our aim is to prove that ‖ξ‖_U = o(|θ|). Combining the equations for u⁺, u and z, we obtain an identity for a(ξ, Ψ) involving the differences of the Huber terms. Adding and subtracting suitable intermediate terms, testing with Ψ = ξ and using the monotonicity of h_γ(·), we get that ‖ξ‖²_U is bounded by the remainder terms, for some generic constant C > 0. Considering the differentiability and Lipschitz continuity of h_γ(·), it then follows that the remainder is of order o(|θ|), measured in ‖·‖_{1,p}, where ‖·‖_{1,p} stands for the norm in the space W^{1,p}(Ω). From regularity results for second-order systems (see [19, Thm. 1, Rem. 14]), it follows that the states possess the required extra W^{1,p} regularity. Inserting the latter in estimate (3.3), we finally get that ‖ξ‖_U = o(|θ|), which completes the proof.

Remark 3.1. The Fréchet differentiability proof makes use of the quasilinear structure of the TGV² variational form, making it difficult to extend to the ICTV model without further regularisation terms. For the latter, however, a Gateaux differentiability result may be obtained using the same proof technique as in [16].
3.2. The adjoint equation. Next, we use the Lagrangian formalism to derive the adjoint equations for both the TGV² and ICTV learning problems. Existence of a solution to the adjoint equation then follows from the well-posedness of the linearized equation.
Defining the Lagrangian associated with the TGV² learning problem and taking its derivative with respect to the state variable (v, w), we get the necessary optimality condition for the adjoint state. Existence of a unique solution then follows from the transposition method, since the linearized equation is well-posed.

Remark 3.2. For the ICTV model it is possible to proceed formally with the Lagrangian approach. We recall that a necessary and sufficient optimality condition for the ICTV functional is given by the corresponding variational equation, with associated Lagrangian functional L. Deriving the Lagrangian with respect to the state variable (u, v), setting it equal to zero, and taking successively δ_v = 0 and δ_u = 0, the adjoint system is obtained.

3.3. Optimality condition. Using the differentiability of the solution operator and the well-posedness of the adjoint equation, we next derive an optimality system for the characterisation of local minima of the bilevel learning problem. Besides the optimality condition itself, a gradient formula arises as a byproduct, which is of importance in the design of solution algorithms for the learning problems.
Theorem 3.2. Let (ᾱ, β̄) ∈ R²₊ be a local optimal solution for problem (2.3). Then there exist Lagrange multipliers Π ∈ U and λ₁, λ₂ ∈ L²(Ω) such that the optimality system (3.6) holds.

Proof. Consider the reduced cost functional F(α, β) = F(u(α, β)). The bilevel optimisation problem can then be formulated as

min_{(α,β) ∈ C} F(α, β),

where F : R² → R and C corresponds to the positive orthant in R². From [38, Thm. 3.1], there exist multipliers λ₁, λ₂ satisfying the sign and complementarity conditions. By taking the derivative with respect to (α, β), and denoting by z the solution to the linearized equation (3.2), we get, together with the adjoint equation (3.6b), the gradient of the reduced cost, which, taking into account the linearized equation, yields the stated system. Altogether, this proves the result.
Remark 3.3. From the existence result (see Remark 2.1), we actually know that, under some assumptions, ᾱ and β̄ are strictly greater than zero. This implies that the multipliers satisfy λ₁ = λ₂ = 0, and the problem is of unconstrained nature. This plays an important role in the design of solution algorithms, since only a mild treatment of the constraints has to be taken into account, as will be shown in Section 4.
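The role of the adjoint state in such a gradient formula can be illustrated on a simplified quadratic surrogate. This is our own sketch, not the paper's nonsmooth TGV² system: for a lower-level problem (I + αL)u = f with symmetric L, the reduced cost F(α) = ½‖u(α) − f₀‖² has derivative F′(α) = −p·(Lu), where the adjoint p solves (I + αL)p = u − f₀. The sketch verifies this against a central finite-difference quotient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
D = np.diff(np.eye(n), axis=0)
L = D.T @ D                           # symmetric positive semidefinite
f0 = np.cos(np.linspace(0.0, 3.0, n))
f = f0 + 0.05 * rng.standard_normal(n)

def state(alpha):
    """Lower-level solution u(alpha) of (I + alpha L) u = f."""
    return np.linalg.solve(np.eye(n) + alpha * L, f)

def reduced_cost(alpha):
    u = state(alpha)
    return 0.5 * np.sum((u - f0) ** 2)

def adjoint_gradient(alpha):
    """Gradient of the reduced cost via one extra (adjoint) linear solve."""
    u = state(alpha)
    # Adjoint equation: (I + alpha L) p = u - f0  (the system is self-adjoint)
    p = np.linalg.solve(np.eye(n) + alpha * L, u - f0)
    return -p @ (L @ u)

alpha = 0.3
fd = (reduced_cost(alpha + 1e-6) - reduced_cost(alpha - 1e-6)) / 2e-6
print(adjoint_gradient(alpha), fd)    # finite-difference check of the gradient
```

One adjoint solve yields the full gradient regardless of the number of parameters, which is exactly why the gradient formula of Theorem 3.2 makes a BFGS outer iteration cheap.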

Numerical algorithms
In this section we propose a second-order quasi-Newton method for the solution of the learning problem with scalar regularisation parameters. The algorithm is based on a BFGS update, preserving the positivity of the iterates through the line search strategy and updating the matrix cyclically depending on the satisfaction of the curvature condition. For the solution of the lower level problem, a semismooth Newton method with a properly modified Jacobi matrix is considered. Moreover, warm initialisation strategies have to be taken into account in order to get convergence for the TGV² problem. The developed algorithm is also extended to a simple linear polynomial case.

4.1. BFGS algorithm. Thanks to the gradient characterisation obtained in Theorem 3.2, we next devise a BFGS algorithm to solve the bilevel learning problems. We employ a few technical tricks to ensure convergence of the classical method. In particular, for numerical stability we need to avoid the boundary of the constraint set on the parameters, so we pick 0 < θ < Θ, considered numerically almost zero or infinity, respectively, and require the box constraints

(4.1)    θ ≤ α, β ≤ Θ.
We also limit the step length so that each iterate gets at most a fixed fraction closer to the boundary. As we show in [17], the solution is in the interior for the regularisation and cost functionals we are interested in. Below this limit, we use Armijo line search. Moreover, the good behaviour of the BFGS method depends upon the BFGS matrix staying positive definite. This would be ensured by the Wolfe conditions but, because of our step length limitation, the curvature condition is not necessarily satisfied. (The Wolfe conditions are guaranteed to be satisfied for some step length σ if our domain is unbounded, but the range of steps satisfying the criterion may lie beyond our maximum step length, and the criterion is not necessarily satisfied closer to the current point.) Instead, we skip the BFGS update if the curvature is negative.
Overall our learning algorithm may be written as follows.
Step (4) ensures that the iterates remain feasible without making use of a projection step. This is justified since it has been proved analytically that the optimal parameters are greater than zero (see [17]).
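The safeguards described above (a step-length cap that keeps iterates strictly inside the box, Armijo backtracking below that cap, and skipping the BFGS update when the curvature condition fails) can be sketched as follows. This is our own simplified illustration on a smooth toy objective, not the paper's implementation:

```python
import numpy as np

def bfgs_box(grad_f, f, x0, lo=1e-6, hi=1e6, frac=0.9, iters=50):
    """BFGS with Armijo line search. Each step moves at most a fraction
    `frac` of the distance to the box [lo, hi]^d, and the inverse-Hessian
    update is skipped whenever the curvature s.y is not positive."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                      # inverse Hessian approximation
    for _ in range(iters):
        g = grad_f(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -H @ g                          # quasi-Newton descent direction
        t = 1.0                             # cap: stay strictly inside the box
        for di, xi in zip(d, x):
            if di < 0:
                t = min(t, frac * (xi - lo) / (-di))
            elif di > 0:
                t = min(t, frac * (hi - xi) / di)
        # Armijo backtracking below the cap
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        x_new = x + s
        y = grad_f(x_new) - g
        if s @ y > 0:                       # curvature OK: standard BFGS update
            rho = 1.0 / (s @ y)
            V = np.eye(len(x)) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x = x_new                           # otherwise keep the previous H
    return x

# Toy reduced cost with interior minimiser (2, 3), mimicking F(alpha, beta)
fun = lambda x: (x[0] - 2.0) ** 2 + 10.0 * (x[1] - 3.0) ** 2
grd = lambda x: np.array([2.0 * (x[0] - 2.0), 20.0 * (x[1] - 3.0)])
x_star = bfgs_box(grd, fun, [1.0, 1.0])
print(x_star)
```

In the bilevel setting, `grad_f` would be supplied by the adjoint-based gradient formula of Theorem 3.2, so each outer iteration costs one state solve and one adjoint solve.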

4.2.
An infeasible semismooth Newton method. In variational form, the TGV² denoising problem can be written in a general abstract primal-dual form, where L ∈ L(H¹(Ω; R^m), H¹(Ω; R^m)′) is a second-order linear elliptic operator, A_j, j = 1, …, N, are linear operators acting on u, and q_j(x), j = 1, …, N, correspond to the dual multipliers.
Introducing suitable pointwise functions m_j(u) and the diagonal application D(u), the primal-dual optimality system may be written as

Bu + Σ_{j=1}^N A*_j q_j = f    in Ω,
D(m_j(u)) q_j − α_j A_j u = 0    a.e. in Ω, (j = 1, …, N).

Its semismooth Newton linearisation requires the Newton derivative ∇_u[D(m_j(u)) q_j].
Remark 4.1. The system (SSN-1) can be simplified, which is crucial to obtain acceptable performance with TGV². Indeed, observe that B is invertible, so we may solve δu from (4.3). Thus we may eliminate δu from (SSN-1) and only solve for δq₁, …, δq_N using a reduced system matrix. Finally, we calculate δu from (4.3).
For the denoising sub-problem (2.3b) we use the method (SSN-1)-(SSN-3) with the reduced system matrix of Remark 4.1. Here, in the case of TGV² we denote the primal variables by z = (v, w), and in the case of ICTV by z = (u, v). For the calculation of the step length τ, we use Armijo line search with parameter c = 10⁻⁴. We end the SSN iterations when the relative step ‖δy^i‖/‖y^i‖ is sufficiently small, where δy^i = (δz^i, δq^i₁, …, δq^i_N) and y^i = (z^i, q^i₁, …, q^i_N).

4.3. Warm initialisation. In our numerical experimentation we generally found Algorithm 4.1 to perform well for learning the regularisation parameter for TV denoising, as was done in [16]. For learning the two (or even more) regularisation parameters for TGV² denoising, we found that a warm initialisation is needed to obtain convergence. More specifically, we use TV as an aid for discovering both the initial iterate (α⁰, β⁰) and the initial BFGS matrix B₁. This is outlined in the following algorithm.

Algorithm 4.2 (BFGS initialisation for TGV² parameter learning). Pick a heuristic factor δ₀ > 0. Then do the following:
(1) Solve the corresponding problem for TV using Algorithm 4.1. This yields an optimal TV denoising parameter α*_TV, as well as the BFGS estimate B_TV for ∇²F(α*_TV).
(2) Run Algorithm 4.1 for TGV² with initialisation (α⁰, β⁰) := (α*_TV δ₀, α*_TV), and initial BFGS matrix B₁ := diag(B_TV δ₀, B_TV).

Experiments
In this section we present some numerical experiments to verify the theoretical properties of the bilevel learning problems and the efficiency of the proposed solution algorithms. In particular, we exhaustively compare the performance of the newly proposed cost functional with respect to well-known quality measures, showing a better behaviour of the new cost for the chosen test images. The performance of the proposed BFGS algorithm, combined with the semismooth Newton method for the lower level problem, is also examined.

5.1. Gaussian denoising. We tested Algorithm 4.1 for TV and Algorithm 4.2 for TGV² Gaussian denoising parameter learning on various images. Here we report the results for two images: the parrot image in Figure 4a, and the geometric image in Figure 5. We applied synthetic noise to the original images, such that the PSNR of the noisy parrot image is 24.7 and the PSNR of the noisy geometric image is 24.8.
We have included results for both the squared L² cost functional L²₂ and the Huberised total variation cost functional L¹η∇. The learning results are reported in Table 1 for the parrot image, and in Table 2 for the geometric image. The denoising results with the discovered parameters can be found in the aforementioned Figure 4 and Figure 5. We report the resulting optimal parameter values, the cost functional value, PSNR, SSIM [37], as well as the number of iterations taken by the outer BFGS method.
Our first observation is that all approaches successfully learn a denoising parameter that gives a good-quality denoised image. Secondly, we observe that the gradient cost functional L¹η∇ performs significantly better, visually and in terms of SSIM, for TGV² parameter learning than the cost functional L²₂. In terms of PSNR the roles are reversed, as is to be expected, since the L²₂ cost is equivalent to PSNR. This again confirms that PSNR is a poor quality measure for images. For TV there is no significant difference between the cost functionals in terms of visual quality, although the PSNR and SSIM differ.
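The equivalence between optimising the L²₂ cost and optimising PSNR follows directly from the standard formula PSNR = 10 log₁₀(peak²/MSE), which is monotone decreasing in the mean squared error. A small sketch of ours, assuming intensities normalised to a peak value of 1:

```python
import numpy as np

def psnr(u, f0, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Monotone decreasing in the mean squared error, so minimising the
    squared-L2 cost and maximising PSNR pick the same parameters."""
    mse = np.mean((np.asarray(u) - np.asarray(f0)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A reconstruction with MSE 0.01 against the reference has PSNR 20 dB:
u = np.full(4, 0.1)
f0 = np.zeros(4)
print(psnr(u, f0))  # 20.0
```

SSIM, by contrast, compares local luminance, contrast, and structure statistics, which is why the two measures can disagree in the tables.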
We also observe that the optimal TGV² parameters (α*, β*) generally satisfy β*/α* ∈ (0.75, 1.5). This confirms the earlier observed heuristic that, for images of roughly 128 to 256 pixels per edge, β ∈ (1, 1.5)α tends to be a good choice. As we can observe from Figure 4 and Figure 5, this optimal TGV² parameter choice also avoids the staircasing effect that can be observed with TV in the results.
In Figure 3 we have plotted, by the red star, the discovered regularisation parameters (α*, β*) reported in Figure 4. Studying the location of the red star, we may conclude that Algorithm 4.1 and Algorithm 4.2 manage to find a nearly optimal parameter in very few BFGS iterations. To obtain a statistically significant outlook on the performance of the different regularisers and cost functionals, we made use of the Berkeley segmentation dataset BSDS300 [28], displayed in Figure 6. We resized each image to 128 pixels on its shortest edge, and took the 128 × 128 top-left square of the image.
To this dataset we applied pixelwise Gaussian noise of variance $\sigma^2 = 2$, $10$, and $20$. We tested the performance of both cost functionals, $L^1_\eta\nabla$ and $L^2_2$, as well as of the $\text{TGV}^2$, ICTV, and TV regularisers, on this dataset for all noise levels. In the first instance, reported in Figures 7-10 (noise levels $\sigma^2 = 2, 20$ only) and Tables 3-5, we applied the proposed bilevel learning model to each image individually, learning the optimal parameters specifically for that image and a corresponding noisy image, for each of the noise levels separately. For the algorithm, we use the same parametrisation as in Section 5.1.
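Generating the noisy test data amounts to adding i.i.d. pixelwise Gaussian noise of a prescribed variance. A minimal sketch, assuming intensities on the $[0, 255]$ scale (which matches the magnitudes $\sigma^2 = 2, 10, 20$ used here; the scale is our assumption, not stated in this excerpt):

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma2: float, seed=None) -> np.ndarray:
    # Add pixelwise i.i.d. Gaussian noise of variance sigma2.
    # Intensities are assumed to lie on the [0, 255] scale.
    rng = np.random.default_rng(seed)
    return img + np.sqrt(sigma2) * rng.standard_normal(img.shape)

clean = np.full((128, 128), 100.0)
noisy = add_gaussian_noise(clean, sigma2=20.0, seed=0)

# The empirical noise variance should be close to the prescribed one.
assert abs(np.var(noisy - clean) - 20.0) < 2.0
```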
The figures display the noisy images and indicate by colour coding the best result as judged by the structural similarity measure SSIM [37], the PSNR, and the objective function value (the $L^1_\eta\nabla$ or $L^2_2$ cost). These criteria correspond, respectively, to the top, middle, and bottom rows of colour-coding squares. A red square indicates that TV performed best, a green square that ICTV performed best, and a blue square that $\text{TGV}^2$ performed best; this is naturally for the optimal parameters discovered by our algorithms for the corresponding regulariser and cost functional.
In the tables, we report this information in a more concise numerical fashion, indicating the mean, standard deviation, and median for each of the criteria (SSIM, PSNR, and cost functional value), as well as the number of images for which each regulariser performed best. We recall that the SSIM is normalised to $[0, 1]$, with higher values being better. Moreover, we perform a one-tailed paired t-test at the 95% confidence level on each criterion and each pair of regularisers, to see whether any pair of regularisers can be ordered. If so, this is indicated in the last row of each table.
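The one-tailed paired t-test used for this ordering can be sketched as follows. The test pairs the per-image scores of two regularisers and asks whether the mean difference is significantly positive; for roughly 200 images the normal approximation to the t critical value ($z_{0.95} \approx 1.645$) is adequate. Function and data names are illustrative, not taken from the paper.

```python
import math

def paired_one_tailed_t(scores_a, scores_b):
    # Paired one-tailed t-test at the 95% level: is the mean of the
    # per-image differences (a - b) significantly greater than zero?
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    # For n ~ 200 the t critical value is close to the normal one.
    return t > 1.645

# Toy example: regulariser A consistently beats B by a small margin.
a = [0.80 + 0.01 * math.sin(i) for i in range(200)]
b = [0.78 + 0.01 * math.cos(i) for i in range(200)]
assert paired_one_tailed_t(a, b)
assert not paired_one_tailed_t(b, a)
```

Because the test is paired, it exploits the fact that all regularisers are evaluated on the same 200 images, which gives it far more power than an unpaired comparison of the score distributions.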
Overall, studying the t-test and the other data, the ordering of the regularisers appears to be ICTV > $\text{TGV}^2$ > TV. This is rather surprising, as in many specific examples $\text{TGV}^2$ has been observed to perform better than ICTV; see our Figures 4 and 5, as well as [4,1]. Only when the noise is high does $\text{TGV}^2$ come on par with ICTV with the $L^1_\eta\nabla$ cost functional, in Figure 9 and Table 5.
A more detailed study of the results in Figures 7-10 indicates that $\text{TGV}^2$ tends to perform better than ICTV when the image contains large smooth areas, whereas ICTV generally performs better for more chaotic images. This observation agrees with the results in Figures 4 and 5, as well as [4,1], where the images are of the former type.
One possible reason for the better performance of ICTV could be that $\text{TGV}^2$ has more degrees of freedom (in ICTV we essentially constrain $w = \nabla v$ for some function $v$) and therefore overfits to the noisy data, until the noise level becomes so high that overfitting would be too severe for any parameter choice. To see whether this is true, we also performed batch learning over the whole dataset. For the first image of the dataset, ICTV does better than $\text{TGV}^2$ in all of Figures 7-14, while for the second image the situation is reversed. We have highlighted these two images for the $L^1_\eta\nabla$ cost in Figures 15-18, for both noise levels $\sigma^2 = 2$ and $\sigma^2 = 20$. In the case where ICTV does better, hardly any difference can be observed by eye, while for the second image $\text{TGV}^2$ clearly exhibits less stair-casing in the smooth areas of the image, especially at the noise level $\sigma^2 = 20$.
Based on this study, ICTV therefore seems to be the most reliable regulariser of the ones tested when the type of image being processed is unknown and a good SSIM, PSNR, or $L^1_\eta\nabla$ cost functional value is desired. But, as can be observed for individual images, it can exhibit artefacts within large smooth areas that are avoided by the use of $\text{TGV}^2$. From Tables 3-5 we may, however, observe that for the low noise levels $\sigma^2 = 2, 10$, and generally for batch learning, $L^1_\eta\nabla$ attains a better (higher) SSIM. Since the SSIM captures the visual quality of images better than the PSNR [37], this recommends the use of our novel total variation cost functional $L^1_\eta\nabla$. Of course, one might attempt to optimise the SSIM directly. This is, however, a non-convex functional, which would pose additional numerical challenges that are avoided by the convex total variation cost.

Conclusion and Outlook
In this paper we propose a bilevel optimisation method in function space for learning the optimal choice of parameters in higher-order total variation regularisation. We present a rigorous analysis of this optimisation problem, as well as a numerical discussion in the context of image denoising. In particular, we make use of the bilevel learning approach to compare the performance, in terms of returned image quality, of TV, ICTV, and TGV regularisation. A statistical analysis, carried out on a dataset of 200 images, suggests that ICTV performs slightly better than TGV, and both perform better than TV on average. For the denoising of images with a high noise level, ICTV and TGV score comparably well. For images with large smooth areas, TGV performs better than ICTV. Moreover, we propose a new cost functional for the bilevel learning problem, which exhibits interesting theoretical properties and behaves better than the PSNR-related $L^2$ cost used previously in the literature. This study raises the question of other, alternative cost functionals. For instance, one could be tempted to use the SSIM as a cost, but its non-convexity might present several analytical and numerical difficulties. The new cost functional proposed in this paper turns out to be a good compromise between an image quality measure and an analytically tractable cost term.


Table 1. Quantified results for the parrot image (image width/height = 256 pixels)

Table 2. Quantified results for the synthetic image (image width/height = 256 pixels)

Only in Table 8 does $\text{TGV}^2$ not lose to ICTV. Another interesting observation is that TV starts to be frequently the best regulariser for individual images, although statistically it still does worse than either ICTV or $\text{TGV}^2$.