Adaptive Image Processing: First Order PDE Constraint Regularizers and a Bilevel Training Scheme

A bilevel training scheme is used to introduce a novel class of regularizers, providing a unified approach to standard regularizers $TGV^2$ and $NsTGV^2$. Optimal parameters and regularizers are identified, and the existence of a solution for any given set of training imaging data is proved by $\Gamma$-convergence under a conditional uniform bound on the trace constant of the operators and a finite-null-space condition. Some first examples and numerical results are given.


Introduction
Image processing aims at the reconstruction of an original "clean" image starting from a "distorted" one, namely from a datum which has been deteriorated or corrupted by noise effects or damaged digital transmission. The key idea of variational formulations in image processing consists in rephrasing this problem as the minimization of an underlying functional of the form
$$I(u) := \|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u),$$
where $u_\eta$ is a given corrupted image, $Q := (-1/2, 1/2)^N$ is the $N$-dimensional unit square (in image processing we usually take $N = 2$, i.e., $Q$ represents the domain of a square image), and $R_\alpha$ is a regularizing functional, with $\alpha$ denoting the intensity parameter (which could be a positive scalar or a vector). Minimizing the functional $I$ allows one to reconstruct a "clean" image based on the functional properties of the regularizer $R_\alpha$.
Within the context of image denoising, for a fixed regularizer $R_\alpha$ we seek to identify
$$u_{\alpha,R} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u) : u \in L^2(Q)\right\}.$$
An example is the ROF model (Rudin et al. 1992), in which the regularizer is taken to be $R_\alpha(u) := \alpha\, TV(u)$, where $TV(u)$ is the total variation of $u$ [see, e.g., Ambrosio et al. (2000, Chapter 4)], $\alpha \in \mathbb{R}^+$ is the tuning parameter, and we have
$$u_{\alpha,TV} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + \alpha\, TV(u) : u \in L^2(Q)\right\}. \qquad (1.1)$$
In view of the coercivity of the minimized functional, the natural class of competitors in (1.1) is $BV(Q)$, the space of real-valued functions of bounded variation in $Q$. The trade-off between the denoising effects of the ROF functional and its feature-preserving capabilities is encoded by the tuning parameter $\alpha \in \mathbb{R}^+$. Indeed, high values of $\alpha$ might lead to a strong penalization of the total variation of $u$, which in turn determines an over-smoothing effect and a resulting loss of information on the internal edges of the reconstructed image, while small values of $\alpha$ cause an unsatisfactory noise removal.
In order to determine the optimal $\alpha$, say $\hat\alpha$, the authors of De Los Reyes et al. (2016, 2017) proposed a bilevel training scheme, which was originally introduced in Machine Learning and later adopted by the image processing community (see Chen et al. 2013, 2014; Domke 2012; Tappen et al. 2007). The bilevel training scheme is a semi-supervised training scheme that optimally adapts itself to the given "clean data". To be precise, let $(u_\eta, u_c)$ be a pair of given images, where $u_\eta$ represents the corrupted version and $u_c$ stands for the original, or "clean", image. This training scheme searches for the optimal $\alpha$ so that the recovered image $u_{\alpha,TV}$, obtained in (1.1), minimizes the $L^2$-distance from the clean image $u_c$. An implementation of such a training scheme, denoted by $(T)$ and equipped with the total variation $TV$, is

Level 1. $\hat\alpha \in \operatorname*{arg\,min}\left\{\|u_{\alpha,TV} - u_c\|^2_{L^2(Q)} : \alpha \in \mathbb{R}^+\right\}$, \hfill (T-L1)

Level 2. $u_{\alpha,TV} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + \alpha\, TV(u) : u \in L^2(Q)\right\}$. \hfill (T-L2)

An important observation is that the geometric properties of the regularizer $TV$ play an essential role in the identification of the reconstructed image $u_{\alpha,TV}$ and may lead to a loss of some fine texture in the image. The choice of a given regularizer $R_\alpha$ is indeed a crucial step in the formulation of the denoising problem: on the one hand, the structure of the regularizer must guarantee the removal of undesired noise effects; on the other hand, the disruption of essential details of the image must be prevented. For these reasons, various choices of regularizers have been proposed in the literature. For example, the second order total generalized variation $TGV^2_\alpha$, defined as
$$TGV^2_\alpha(u) := \min\left\{\alpha_1 |Du - v|_{M_b(Q;\mathbb{R}^N)} + \alpha_0 |(\mathrm{sym}\nabla) v|_{M_b(Q;\mathbb{R}^{N\times N})} : v \in BD(Q)\right\}, \qquad (1.2)$$
has been characterized in Bredies et al. (2010), where $Du$ denotes the distributional gradient of $u$, $(\mathrm{sym}\nabla)v := (\nabla v + \nabla^T v)/2$, $M_b(Q;\mathbb{R}^{N\times N})$ is the space of bounded Radon measures in $Q$ with values in $\mathbb{R}^{N\times N}$, $\alpha_0$ and $\alpha_1$ are positive tuning parameters, and $\alpha := (\alpha_0, \alpha_1)$. A further possible choice for the regularizer is the non-symmetric counterpart of the $TGV^2_\alpha$-seminorm defined above, namely the $NsTGV^2_\alpha$ functional (see, e.g., Valkonen et al. 2013; Valkonen 2017). The different regularizers have been shown to have several perks and drawbacks for image reconstruction. An important question is thus how to identify the regularizer that might provide the best possible image denoising for a given class of corrupted images.
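As an illustration of scheme $(T)$, the following self-contained sketch (our own, with hypothetical grid values and synthetic data; not the implementation used in this paper) solves Level 2 with a standard Chambolle–Pock primal-dual iteration for the ROF functional and performs Level 1 by exhaustive search over a grid of values of $\alpha$.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with homogeneous Neumann boundary."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def rof_denoise(u_eta, alpha, n_iter=300):
    """Minimize ||u - u_eta||_{L2}^2 + alpha * TV(u) by Chambolle-Pock."""
    u, u_bar = u_eta.copy(), u_eta.copy()
    px, py = np.zeros_like(u_eta), np.zeros_like(u_eta)
    tau = sigma = 1.0 / np.sqrt(8.0)             # tau * sigma * ||grad||^2 <= 1
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / alpha)
        px, py = px / scale, py / scale          # project onto {|p| <= alpha}
        u_old = u
        # proximal step for tau * ||u - u_eta||^2 (no 1/2 factor in the model)
        u = (u + tau * div(px, py) + 2.0 * tau * u_eta) / (1.0 + 2.0 * tau)
        u_bar = 2.0 * u - u_old
    return u

# Level 1 of (T): exhaustive search over a grid of tuning parameters alpha.
rng = np.random.default_rng(0)
u_c = np.zeros((64, 64)); u_c[16:48, 16:48] = 1.0       # synthetic clean image
u_eta = u_c + 0.1 * rng.standard_normal(u_c.shape)      # corrupted image
alphas = np.linspace(0.01, 0.5, 20)
costs = [np.linalg.norm(rof_denoise(u_eta, a) - u_c) for a in alphas]
print("optimal alpha:", alphas[int(np.argmin(costs))])
```

The grid search mirrors the structure of (T-L1): each candidate $\alpha$ triggers one Level-2 solve, and the winner minimizes the $L^2$-distance to $u_c$.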
To address this problem, it is natural to use a straightforward modification of scheme $(T)$ by inserting different regularizers inside Level 2 of $(T)$. For example, one could set

Level 1. $\hat R_\alpha \in \operatorname*{arg\,min}\left\{\|u_{\alpha,R} - u_c\|^2_{L^2(Q)} : R_\alpha \in \{\alpha TV,\, TGV^2_\alpha,\, NsTGV^2_\alpha\}\right\}$,

Level 2. $u_{\alpha,R} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u) : u \in L^1(Q)\right\}$. \hfill (1.3)

However, the finite number of possible choices for the regularizer within this training scheme would imply that the optimal regularizer $\hat R_\alpha$ would simply be determined by performing scheme $(T)$ finitely many times, each time with a different regularizer $R_\alpha$. In turn, some possible texture effects, for which an "intermediate" (or interpolated) reconstruction between the one provided by, say, $TGV^2_\alpha$ and $NsTGV^2_\alpha$ might be more accurate, would then be neglected in the optimization procedure. Therefore, one main challenge in the setup of such a training scheme is to give a meaningful interpolation between the regularizers used in (1.3), and also to guarantee that the collection of the corresponding functional spaces exhibits compactness and lower semicontinuity properties.
The aim of this paper is threefold. First, we propose a novel class of image-processing operators, the PDE-constrained total generalized variation operators, or $PGV^2_{\alpha,B}$, defined as
$$PGV^2_{\alpha,B}(u) := \inf\left\{\alpha_1 |Du - v|_{M_b(Q;\mathbb{R}^N)} + \alpha_0 |Bv|_{M_b(Q;\mathbb{R}^{N\times N})} : v \in BV_B(Q;\mathbb{R}^N)\right\}, \qquad (1.4)$$
where $B$ is a first order linear differential operator (see Sect. 2 and Definition 3.5) and $\alpha := (\alpha_0, \alpha_1)$, with $\alpha_0, \alpha_1 \in (0,+\infty)$. We also define the space $BPGV^2_B(Q)$ of functions with bounded second order $PGV^2_{\alpha,B}$-seminorms (see Definition 3.6). Note that if $B := \mathrm{sym}\nabla$, then the operator $PGV^2_{\alpha,B}$ defined in (1.4) coincides with the operator $TGV^2_\alpha$ mentioned in (1.2). In fact, we will show that, under appropriate assumptions (see Definition 6.1), the class described in (1.4) provides a unified approach to some of the standard regularizers mentioned in (1.3), generalizing the results in Brinkmann et al. (2019) (see Sect. 7.2). Moreover, the collection of functionals described in (1.4) naturally incorporates the recent PDE-based approach to image denoising formulated in Barbu and Marinoschi (2017) via a nonconvex optimal control problem, thus offering a very general and abstract framework to simultaneously describe a variety of different image-processing techniques. Adding to the model higher-order regularizations which can differ from the symmetrized gradient additionally allows one to enhance image reconstruction in one direction more than in the others, thus paving the way for further study of anisotropic noise reduction.
The second main goal of this article is the study of a training scheme optimizing the trade-off between effective reconstruction and fine image-detail preservation. That is, we propose a new bilevel training scheme that simultaneously yields the optimal regularizer $PGV^2_{\alpha,B}$ in the class described in (1.4) and an optimal tuning parameter $\alpha$, so that the corresponding reconstructed image $u_{\alpha,B}$, obtained in Level 2 of the $(T^2_\theta)$-scheme (see (T²θ-L2) below), minimizes the $L^2$-distance from the original clean image $u_c$. To be precise, in Sects. 3, 4, and 5 we study the improved training scheme $(T^2_\theta)$, for $\theta \in (0,1)$, defined as

Level 1. $(\hat\alpha, \hat B) \in \operatorname*{arg\,min}\left\{\|u_{\alpha,B} - u_c\|^2_{L^2(Q)} : (\alpha, B) \in [\theta, 1/\theta]^2 \times \Sigma\right\}$, \hfill (T²θ-L1)

Level 2. $u_{\alpha,B} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + PGV^2_{\alpha,B}(u) : u \in L^2(Q)\right\}$, \hfill (T²θ-L2)

where $\Sigma$ is an infinite collection of first order linear differential operators $B$ (see Definitions 3.4, 5.1). We prove the existence of optimal solutions to (T²θ-L1) by showing that the functional
$$I_{\alpha,B}(u) := \|u - u_\eta\|^2_{L^2(Q)} + PGV^2_{\alpha,B}(u) \qquad (1.5)$$
is continuous in the $L^1$ topology, in the sense of $\Gamma$-convergence, with respect to the parameters $\alpha$ and the operators $B$ (see Theorem 4.2). A simplified statement of our main result (see Theorem 5.4) is the following.

Theorem 1.1 Let $\theta \in (0,1)$ and let $\Sigma$ be a training set in the sense of Definition 5.1. Then the training scheme $(T^2_\theta)$ admits at least one solution $(\hat\alpha, \hat B) \in [\theta, 1/\theta]^2 \times \Sigma$, and provides an associated optimally reconstructed image $u_{\hat\alpha,\hat B} \in BV(Q)$.

The collection $\Sigma$ of operators $B$ used in (T²θ-L1) has to satisfy several natural regularity and ellipticity assumptions, which are fulfilled by $B := \nabla$ and $B := \mathrm{sym}\nabla$ (see Sect. 7.2.1). The general requirements on $B$ that allow scheme $(T^2_\theta)$ to have a solution are listed in Assumptions 3.2 and 3.3. Later, in Sect. 6, as the third main contribution of this article, we provide in Definition 6.1 a collection of operators $B$ satisfying Assumptions 3.2 and 3.3, under some uniform bounds on the behavior of their traces and under finiteness of their null spaces. A simplified statement of our result is the following (see Theorem 6.5 for the detailed formulation).

Theorem 1.2 Let $B$ be a first order differential operator such that there exists a differential operator $\mathcal{A}$ for which $(\mathcal{A}, B)$ is a training operator pair, namely $\mathcal{A}$ admits a fundamental solution satisfying suitable regularity assumptions, and the pair $(\mathcal{A}, B)$ fulfills a suitable integration-by-parts formula (see Definition 6.1 for the precise conditions). Then $B$ is such that the training scheme $(T^2_\theta)$ admits a solution.

The requirements collected in Definition 6.1 and the analysis in Sect. 6 stem from the observation that a fundamental property that the admissible operators $B$ must satisfy is that the set of maps $v \in L^1(Q;\mathbb{R}^N)$ such that $Bv$ is a bounded Radon measure (henceforth denoted by $BV_B(Q;\mathbb{R}^N)$) must embed compactly in $L^1(Q;\mathbb{R}^N)$. In the case in which $B$ coincides with $\nabla$ or $\mathrm{sym}\nabla$, a crucial ingredient is the Kolmogorov–Riesz compactness theorem (see Brezis 2011, Theorem 4.26, and Proposition 6.6). In particular, for $B = \mathrm{sym}\nabla$ the key point of the proof is to guarantee that bounded sets $F \subset BD(Q)$ satisfy
$$\lim_{|h|\to 0}\, \sup_{f \in F}\, \|\tau_h f - f\|_{L^1(Q;\mathbb{R}^N)} = 0,$$
where $\tau_h f := f(\cdot + h)$ denotes the translation of $f$ by $h \in \mathbb{R}^N$. This in turn relies on the formal computation
$$\tau_h f - f = f * (\delta_h - \delta) = f * \operatorname{curl}\operatorname{curl}\big((\delta_h * \phi) - \phi\big),$$
where $\phi$ is a fundamental solution for curl curl, and where $\delta$ and $\delta_h$ denote the Dirac deltas centered at the origin and at $h$, respectively. In the case in which $B = \mathrm{sym}\nabla$ the conclusion then follows from the fact that one can perform an "integration by parts" in the right-hand side of the above formula, and estimate the quantity $\operatorname{curl}\operatorname{curl}((\delta_h * \phi - \phi) * f)$ by means of the total variation of $(\mathrm{sym}\nabla) f$, owing to the regularity of the fundamental solution of curl curl. The operator $\mathcal{A}$ in Theorem 1.2 plays the role of curl curl in the case in which $\mathrm{sym}\nabla$ is replaced by a generic operator $B$. Definition 6.1 is given in such a way as to guarantee that the above formal argument is rigorously justified for a pair of operators $(\mathcal{A}, B)$. Finally, in Sect. 7.2 we give some explicit examples showing that our class of regularizers $PGV^2_{\alpha,B}$ includes the seminorms $TGV^2_\alpha$ and $NsTGV^2_\alpha$, as well as smooth interpolations between them. We remark that the task of determining not only the optimal tuning parameter but also the optimal regularizer for given training image data $(u_\eta, u_c)$ has been undertaken in Davoli and Liu (2018), where we introduced one-dimensional real-order $TGV^r$ regularizers, $r \in [1,+\infty)$, as well as a bilevel training scheme that simultaneously provides the optimal intensity parameters and order of derivation for one-dimensional signals.
Our analysis is complemented by the very first numerical simulations of the proposed bilevel training scheme. Although this work focuses mainly on the theoretical analysis of the operators $PGV^2_{\alpha,B}$ and on the existence of optimal solutions for the training scheme $(T^2_\theta)$, in Sect. 7.3 we discuss a primal-dual algorithm for solving (T²θ-L2) and provide some preliminary numerical examples, such as image denoising.
With this article we initiate our study of the combination of PDE constraints and bilevel training schemes in image processing. Future goals will be: • the construction of a finite grid approximation in which the optimal solution $(\hat\alpha, \hat B)$ of the training scheme $(T^2_\theta)$ can be efficiently determined, with an estimate of the approximation accuracy; • spatially dependent differential operators and multi-layer training schemes. This will allow us to specialize the regularization according to the position in the image, providing a more accurate analysis of complex textures and of images alternating areas with finer details and parts with sharper contours (see also Fonseca and Liu 2017).
This paper is organized as follows: in Sect. 2 we collect some notation and preliminary results. In Sect. 3 we analyze the main properties of the $PGV^2_{\alpha,B}$-seminorms. The $\Gamma$-convergence result and the bilevel training scheme are the subjects of Sects. 4 and 5, respectively. We point out that the results in Sects. 3 and 4 are direct generalizations of the works in Bredies and Valkonen (2011), Bredies and Holler (1993). The novelty of our approach consists in providing a slightly stronger analysis of the behavior of the functionals in (1.5), by showing not only convergence of minimizers under convergence of parameters and regularizers, but also a complete $\Gamma$-convergence result.
The expert Reader might skip Sects. 3-5, and proceed directly with the content of Sect. 6. Section 6 is devoted to the analysis of the space BV B for suitable differential operators B. The numerical implementation of some explicit examples is performed in Sect. 7.3.

Notations and Preliminary Results
We collect below some notation that will be adopted in connection with differential operators. Let $N \in \mathbb{N}$ be given, and let $Q := (-1/2, 1/2)^N$ be the open unit cube in $\mathbb{R}^N$ centered at the origin and with sides parallel to the coordinate axes. $M_{N^3}$ is the space of real tensors of order $N \times N \times N$. Also, $D'(Q;\mathbb{R}^N)$ and $D'(Q;\mathbb{R}^{N\times N})$ stand for the spaces of distributions with values in $\mathbb{R}^N$ and $\mathbb{R}^{N\times N}$, respectively, and $\mathbb{R}^N_+$ denotes the set of vectors in $\mathbb{R}^N$ having positive entries. For every open set $U \subset \mathbb{R}^N$, the notation $B$ will be used for first order differential operators of the form
$$B := \sum_{i=1}^N B^i \frac{\partial}{\partial x_i} : D'(U;\mathbb{R}^N) \to D'(U;\mathbb{R}^{N\times N}), \qquad (2.1)$$
where $\frac{\partial}{\partial x_i}$ denotes the distributional derivative with respect to the $i$-th variable, and where $B^i \in M_{N^3}$ for each $i = 1,\dots,N$.
Given a sequence $\{B_n\}_{n=1}^\infty$ of first order differential operators and a first order differential operator $B$, with coefficients $\{B^i_n\}_{n=1}^\infty$ and $B^i$, $i = 1,\dots,N$, respectively, we say that $B_n \to B$ in $\ell^\infty$ if
$$\|B_n - B\|_{\ell^\infty} := \max_{i=1,\dots,N} |B^i_n - B^i| \to 0 \quad\text{as } n \to \infty, \qquad (2.2)$$
where, for $B \in M_{N^3}$, $|B|$ stands for its Euclidean norm.
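To fix ideas, the following minimal numpy sketch (our own illustration, not part of the paper) applies an operator of the form (2.1) to a discretized vector field, with the coefficients $B^i$ stored as arrays of shape $(N, N, N)$, and computes the $\ell^\infty$-distance (2.2) between two operators.

```python
import numpy as np

N = 2  # spatial dimension; fields v : Q -> R^N sampled on an n x n grid

def apply_B(coeff, v, h=1.0):
    """Apply B = sum_i B^i d/dx_i, as in (2.1), to a vector field v.

    coeff[i, j, k, l] is the (j,k,l) entry of B^i in M_{N^3}, so that
    (Bv)_{jk} = sum_{i,l} coeff[i, j, k, l] * d v_l / d x_i.
    v has shape (N, n, n); the result has shape (N, N, n, n).
    """
    # dv[i, l] = d v_l / d x_i, computed by central finite differences
    dv = np.stack([np.stack(np.gradient(v[l], h), axis=0) for l in range(N)], axis=1)
    return np.einsum('ijkl,il...->jk...', coeff, dv)

# Coefficients of the symmetrized gradient: B^i_{jkl} = (d_ij d_kl + d_ik d_jl)/2,
# so that (Bv)_{jk} = (d_j v_k + d_k v_j)/2; the full gradient has B^i_{jkl} = d_ij d_kl.
I = np.eye(N)
B_sym = 0.5 * (np.einsum('ij,kl->ijkl', I, I) + np.einsum('ik,jl->ijkl', I, I))
B_grad = np.einsum('ij,kl->ijkl', I, I)

v = np.random.default_rng(0).standard_normal((N, 32, 32))
Ev = apply_B(B_sym, v)
assert np.allclose(Ev[0, 1], Ev[1, 0])        # symmetrized gradient is symmetric

# l-infinity distance between the two operators, as in (2.2)
dist = max(np.linalg.norm(B_sym[i] - B_grad[i]) for i in range(N))
print("||B_sym - B_grad||_linf =", dist)
```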

The Space BV B and the Class of Admissible Operators
We generalize the standard total variation seminorm by using first order differential operators $B : D'(Q;\mathbb{R}^N) \to D'(Q;\mathbb{R}^{N\times N})$ of the form (2.1).
Definition 3.1 We define the space of tensor-valued functions $BV_B(Q;\mathbb{R}^N)$ as
$$BV_B(Q;\mathbb{R}^N) := \left\{v \in L^1(Q;\mathbb{R}^N) : Bv \in M_b(Q;\mathbb{R}^{N\times N})\right\},$$
and we equip it with the norm
$$\|v\|_{BV_B(Q;\mathbb{R}^N)} := \|v\|_{L^1(Q;\mathbb{R}^N)} + |Bv|_{M_b(Q;\mathbb{R}^{N\times N})}.$$
We refer to Raiţă and Skorobogatova (2020) for some recent results on $BV_B$-spaces for elliptic and cancelling operators, as well as to Kristensen and Raiţă (2022) for a study of associated Young measures. We point out that, in the same way in which $BV$ spaces relate to $W^{1,p}$-spaces, the spaces $BV_B$ are connected to the theory of $W^{1,p}_B$-spaces, cf. Gmeineder and Raiţă (2019), Gmeineder et al. (2019), Raiţă (2018). See also Guerra and Raiţă (2020) for a related compensated-compactness study.
In order to introduce the class of admissible operators, we first list some assumptions on the operator B.

Assumption 3.2 The operator $B$ satisfies:
1. (Compactness) The injection of $BV_B(Q;\mathbb{R}^N)$ into $L^1(Q;\mathbb{R}^N)$ is compact.
2. (Density) For every $v \in BV_B(Q;\mathbb{R}^N)$ there exists a sequence $\{v_n\}_{n=1}^\infty \subset C^\infty(Q;\mathbb{R}^N)$ such that $v_n \to v$ strongly in $L^1(Q;\mathbb{R}^N)$ and $|Bv_n|_{M_b(Q;\mathbb{R}^{N\times N})} \to |Bv|_{M_b(Q;\mathbb{R}^{N\times N})}$.
We point out that all requirements above are satisfied for B := ∇.

Assumption 3.3 The following compactness and closure property applies to a collection of operators: if $\{B_n\}_{n=1}^\infty$ is a sequence in the collection with $B_n \to B$ in $\ell^\infty$, and $v_n \in BV_{B_n}(Q;\mathbb{R}^N)$ satisfy $\sup\{\|v_n\|_{BV_{B_n}(Q;\mathbb{R}^N)} : n \in \mathbb{N}\} < +\infty$, then there exists $v \in BV_B(Q;\mathbb{R}^N)$ such that, up to a subsequence (not relabeled), $v_n \to v$ strongly in $L^1(Q;\mathbb{R}^N)$ and
$$|Bv|_{M_b(Q;\mathbb{R}^{N\times N})} \le \liminf_{n\to\infty} |B_n v_n|_{M_b(Q;\mathbb{R}^{N\times N})}.$$

Definition 3.4
We denote by $\Pi$ the collection of operators $B$ defined in (2.1), with finite dimensional null space $N(B)$, and satisfying Assumption 3.2.

In Sect. 6 we will exhibit a subclass of operators $B \in \Pi$ additionally fulfilling the compactness and closure Assumption 3.3.

The PGV-Total Generalized Variation
We introduce below the definition of the PDE-constrained total generalized variation seminorms.

Definition 3.5 Let $B$ be a first order differential operator of the form (2.1) and let $\alpha = (\alpha_0, \alpha_1) \in \mathbb{R}^2_+$. For $u \in L^1(Q)$ we define
$$PGV^2_{\alpha,B}(u) := \inf\left\{\alpha_1 |Du - v|_{M_b(Q;\mathbb{R}^N)} + \alpha_0 |Bv|_{M_b(Q;\mathbb{R}^{N\times N})} : v \in BV_B(Q;\mathbb{R}^N)\right\}.$$

We note that for all $\alpha \in \mathbb{R}^2_+$ the seminorms $PGV^2_{\alpha,B}$ are topologically equivalent. With a slight abuse of notation, in what follows we will write $PGV^2_B$ instead of $PGV^2_{\alpha,B}$ whenever the dependence of the seminorm on a specific multi-index $\alpha \in \mathbb{R}^2_+$ is not relevant for the presentation of the results. We introduce below the set of functions with bounded PDE-constrained generalized variation seminorms.

Definition 3.6 We define
$$BPGV^2_B(Q) := \left\{u \in L^1(Q) : PGV^2_B(u) < +\infty\right\},$$
endowed with the norm $\|u\|_{BPGV^2_B(Q)} := \|u\|_{L^1(Q)} + PGV^2_B(u)$.

We next show that the $PGV^2_B$-seminorm is finite if and only if the $TV$-seminorm is. The next three propositions show some basic properties of the $PGV^2$ regularizers. The expert Reader might skip their proofs and proceed directly to Sect. 4.

Proposition 3.7 Let $u \in L^1(Q)$ and recall $PGV^2_B(u)$ from Definition 3.5. Then $PGV^2_B(u) < +\infty$ if and only if $TV(u) < +\infty$. It suffices to observe that, for every $v \in BV_B(Q;\mathbb{R}^N)$,
$$|Du|_{M_b(Q;\mathbb{R}^N)} \le |Du - v|_{M_b(Q;\mathbb{R}^N)} + \|v\|_{L^1(Q;\mathbb{R}^N)}, \qquad PGV^2_{\alpha,B}(u) \le \alpha_1 |Du|_{M_b(Q;\mathbb{R}^N)},$$
the latter following from the choice $v = 0$ in Definition 3.5. We next prove that the infimum problem in the right-hand side of (3.3) has a solution.
Proof Let $u \in BV(Q)$ and, without loss of generality, assume that $\alpha = (1,1)$. In view of Proposition 3.7 we have $PGV^2_B(u) < +\infty$, and we may select a minimizing sequence $\{v_n\} \subset BV_B(Q;\mathbb{R}^N)$ with uniformly bounded energies for every $n \in \mathbb{N}$. In view of Assumption 3.2, together with (3.5) and (3.6), we extract a subsequence converging strongly in $L^1(Q;\mathbb{R}^N)$ to a limit map $v \in BV_B(Q;\mathbb{R}^N)$. The minimality of $v$ follows by lower semicontinuity.
We close this section by studying the asymptotic behavior of the $PGV^2_B$-seminorms in terms of the operator $B$, for subclasses of $\Pi$ satisfying Assumption 3.3.
We now claim that (3.10) holds. In view of the density result in Assumption 3.2, Statement 2, we may assume that $v \in C^\infty(Q;\mathbb{R}^N)$ and, for $\varepsilon > 0$ small, we obtain the corresponding estimate, where in the last inequality we used (3.11). Claim (3.10) then follows from the arbitrariness of $\varepsilon > 0$.

Γ-Convergence of Functionals Defined by PGV-Total Generalized Variation Seminorms
In this section we prove a $\Gamma$-convergence result with respect to the operator $B$ for the functionals $I_{\alpha,B}$ introduced in (1.5). For $B \in \Pi$ and $r > 0$ we set
$$(B)_r := \left\{\tilde B \in \Pi : \|\tilde B - B\|_{\ell^\infty} < r\right\}. \qquad (4.1)$$
Throughout this section, let $u_\eta \in L^2(Q)$ be a given datum representing a corrupted image.
The following theorem is the main result of this section.
Theorem 4.2 Let $\{B_n\}_{n=1}^\infty \subset \Pi$ satisfy Assumption 3.3, and let $\{\alpha_n\}_{n=1}^\infty \subset \mathbb{R}^2_+$ be such that $B_n \to B$ in $\ell^\infty$ and $\alpha_n \to \alpha \in \mathbb{R}^2_+$. Then the functionals $I_{\alpha_n,B_n}$ satisfy the following compactness property: if $\{u_n\} \subset BV(Q)$ is such that $\sup\{I_{\alpha_n,B_n}(u_n) : n \in \mathbb{N}\} < +\infty$, then there exists $u \in BV(Q)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \overset{*}{\rightharpoonup} u$ weakly* in $BV(Q)$.
Additionally, $I_{\alpha_n,B_n}$ $\Gamma$-converges to $I_{\alpha,B}$ in the $L^1$ topology. To be precise, for every $u \in BV(Q)$ the following two conditions hold:

(Liminf inequality) for every sequence $\{u_n\} \subset BV(Q)$ with $u_n \to u$ strongly in $L^1(Q)$,
$$I_{\alpha,B}(u) \le \liminf_{n\to\infty} I_{\alpha_n,B_n}(u_n);$$

(Recovery sequence) there exists $\{u_n\} \subset BV(Q)$ with $u_n \to u$ strongly in $L^1(Q)$ such that
$$I_{\alpha,B}(u) = \lim_{n\to\infty} I_{\alpha_n,B_n}(u_n).$$

We subdivide the proof of Theorem 4.2 into two propositions.
For $B \in \Pi$, we consider the projection operator $P_B : L^1(Q;\mathbb{R}^N) \to N(B)$ onto the null space of $B$. Note that this projection operator is well defined owing to the assumption that $N(B)$ is finite dimensional [see Brezis (2011, p. 38, Definition and Example 2) and Breit et al. (2017, Subsection 3.1)].
Next we have an enhanced version of Korn's inequality, uniform with respect to the operators: under the assumptions of Theorem 4.2, there exists a constant $C > 0$ such that
$$\|v - P_{B_n}(v)\|_{L^1(Q;\mathbb{R}^N)} \le C\, |B_n v|_{M_b(Q;\mathbb{R}^{N\times N})} \qquad (4.2)$$
for every $v \in BV_{B_n}(Q;\mathbb{R}^N)$ and every $n \in \mathbb{N}$. The proof proceeds by contradiction: assuming that (4.2) fails, we find normalized maps $\tilde v_n$ violating it with constant $n$
for every $n \in \mathbb{N}$. Up to a normalization, we can assume that $\|\tilde v_n\|_{L^1(Q;\mathbb{R}^N)} = 1$ and $P_{B_n}(\tilde v_n) = 0$. Thus, by (4.3) we have $|B_n \tilde v_n|_{M_b(Q;\mathbb{R}^{N\times N})} \to 0$. In view of Assumption 3.3, up to a further subsequence (not relabeled), there exists $\tilde v \in BV_B(Q;\mathbb{R}^N)$ such that $\tilde v_n \to \tilde v$ strongly in $L^1(Q)$ and $|B\tilde v|_{M_b(Q;\mathbb{R}^{N\times N})} = 0$. Moreover, in view of (4.5), we also have $\|\tilde v\|_{L^1(Q;\mathbb{R}^N)} = 1$.
By the joint continuity of the projection operator and by (4.4), we obtain $P_B(\tilde v) = 0$; since $|B\tilde v|_{M_b} = 0$ implies $\tilde v \in N(B)$, so that $\tilde v = P_B(\tilde v)$, we must have $\tilde v = 0$, contradicting the fact that $\|\tilde v\|_{L^1(Q;\mathbb{R}^N)} = 1$.
The following proposition is instrumental for establishing the liminf inequality.

Proposition 4.4 Let $\{B_n\}_{n=1}^\infty \subset \Pi$ and $\{\alpha_n\}_{n=1}^\infty \subset \mathbb{R}^2_+$ be as in Theorem 4.2, and let $\{u_n\} \subset BV(Q)$ be such that
$$\sup\{I_{\alpha_n,B_n}(u_n) : n \in \mathbb{N}\} < +\infty. \qquad (4.6)$$
Then there exists $u \in BV(Q)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \overset{*}{\rightharpoonup} u$ weakly* in $BV(Q)$ and $I_{\alpha,B}(u) \le \liminf_{n\to\infty} I_{\alpha_n,B_n}(u_n)$.

Proof Fix $r > 0$ and recall the definition of $(B)_r$ from (4.1). We claim that if $r$ is small enough then there exists $C_r > 0$ such that
$$C_r^{-1}\,\|u\|_{BV(Q)} \le \|u\|_{BPGV^2_B(Q)} \le C_r\,\|u\|_{BV(Q)} \qquad (4.8)$$
for all $u \in BV(Q)$ and $B \in (B)_r$.
Indeed, by Definitions 3.5 and 3.6 we always have $\|u\|_{BPGV^2_B(Q)} \le C\,\|u\|_{BV(Q)}$ for all $B \in \Pi$ and $u \in BV(Q)$. The crucial step is to prove that the second inequality in (4.8) holds. Set
$$N_r(B) := \bigcup\left\{N(\tilde B) : \tilde B \in (B)_r\right\}.$$
We claim that there exists $C > 0$, depending on $r$, such that for each $u \in BV(Q)$ and $\omega \in N_r(B)$ the estimate (4.9) holds. Suppose that (4.9) fails. Then we find sequences $\{u_n\} \subset BV(Q)$ and $\{\omega_n\} \subset N_r(B)$ violating (4.9) with constant $n$, for every $n \in \mathbb{N}$. Thus, up to a normalization, we can assume that (4.10) and (4.11) hold, which implies that $u_n \to 0$ strongly in $L^1(Q)$. By (4.10) and (4.11), it follows that $|\omega_n|_{M_b(Q;\mathbb{R}^N)}$ is uniformly bounded, and hence, up to a subsequence (not relabeled), there exists a limit point $\omega_0$. For every $n \in \mathbb{N}$, let $\tilde B_n \in (B)_r$ be such that $\omega_n \in N(\tilde B_n)$, so that $\tilde B_n \omega_n = 0$ for all $n \in \mathbb{N}$. Since $\|\tilde B_n - B\|_{\ell^\infty} < r$, in particular the sequence $\{\tilde B_n\}_{n=1}^\infty$ fulfills Assumption 3.3, and hence, upon extracting a further subsequence (not relabeled), there holds $\omega_n \to \omega_0$ strongly in $L^1(Q;\mathbb{R}^N)$.
Additionally, since $u_n \to 0$ strongly in $L^1(Q)$, we infer that $Du_n \to 0$ in the sense of distributions. Therefore, by (4.12) we deduce that $\omega_0 = 0$. Using again (4.11), we reach a contradiction with (4.10). This completes the proof of (4.9).
We are now ready to prove the second inequality in (4.8), i.e.,
$$\|u\|_{BV(Q)} \le C_r\,\|u\|_{BPGV^2_B(Q)} \qquad (4.13)$$
for some constant $C_r > 0$ and for all $B \in (B)_r$. Fix $B \in (B)_r$ and, by Proposition 3.8, let $v_B$ satisfy (4.14). A chain of estimates then yields the desired bound, where in the first inequality we use (4.9), the third inequality follows by (4.2), and in the last equality we invoke (4.14); defining $C_r$ accordingly, we conclude (4.13).

Now we prove the compactness property. Fix $\varepsilon > 0$. We first observe that, since $\alpha_n \to \alpha \in \mathbb{R}^2_+$, writing $\alpha_n = (\alpha^0_n, \alpha^1_n)$, for $n$ large enough there holds (4.15). In particular, in view of (4.6) we obtain the uniform bound (4.16). Since $B_n \to B$ in $\ell^\infty$, there exists $N > 0$ such that $B_n \in (B)_1$ for all $n \ge N$. Thus, by (4.8) and (4.16), we infer that
$$\sup\{\|u_n\|_{BV(Q)} : n \in \mathbb{N}\} \le C_1\,\sup\{\|u_n\|_{BPGV^2_{B_n}(Q)} : n \in \mathbb{N}\} < +\infty,$$
and thus we may find $u \in BV(Q)$ such that, up to a subsequence (not relabeled), $u_n \overset{*}{\rightharpoonup} u$ weakly* in $BV(Q)$. Additionally, again from Proposition 3.8, for every $n \in \mathbb{N}$ there exists $v_n \in BV_{B_n}(Q;\mathbb{R}^N)$ attaining the infimum in the definition of $PGV^2_{\alpha_n,B_n}(u_n)$ (see (4.7)). By (4.6) and (4.7), and in view of Assumption 3.3, we find $v \in BV_B(Q;\mathbb{R}^N)$ such that, up to a subsequence (not relabeled), $v_n \to v$ strongly in $L^1$. Therefore the liminf inequality follows, where in the second-to-last step we use Assumption 3.3 and (4.15). The arbitrariness of $\varepsilon$ concludes the proof of the proposition.
Proposition 4.5 Let $\{B_n\}_{n=1}^\infty$ and $\{\alpha_n\}_{n=1}^\infty$ be as in Theorem 4.2 and let $u \in BV(Q)$. Then the constant sequence $u_n := u$ is a recovery sequence, namely $I_{\alpha_n,B_n}(u) \to I_{\alpha,B}(u)$.

Proof This is a direct consequence of Proposition 3.9, by choosing $u_n := u$.
We close Sect. 4 by proving Theorem 4.2.

Proof of Theorem 4.2 Properties (Compactness) and (Liminf inequality) hold in view
of Proposition 4.4, and Property (Recovery sequence) follows from Proposition 4.5.

The Bilevel Training Scheme with PGV-Regularizers
In this section, we introduce a bilevel training scheme associated to our class of regularizers and show its well-posedness. Let $u_\eta \in L^2(Q)$ and $u_c \in BV(Q)$ be the corrupted and clean images, respectively. In what follows we will refer to pairs $(u_c, u_\eta)$ as training pairs. We recall that $\Pi$ was introduced in Definition 3.4.
Definition 5.1 We say that $\Sigma \subset \Pi$ is a training set if the operators in $\Sigma$ satisfy Assumption 3.3, and if $\Sigma$ is closed and bounded in $\ell^\infty$.
Examples of training sets are provided in Sect. 7. We introduce the following bilevel training scheme.
Definition 5.2 Let $\theta \in (0,1)$ and let $\Sigma$ be a training set. The two levels of the scheme $(T^2_\theta)$ are

Level 1. $(\hat\alpha, \hat B) \in \operatorname*{arg\,min}\left\{\|u_{\alpha,B} - u_c\|^2_{L^2(Q)} : (\alpha, B) \in [\theta, 1/\theta]^2 \times \Sigma\right\}$, \hfill (T²θ-L1)

Level 2. $u_{\alpha,B} := \operatorname*{arg\,min}\left\{\|u - u_\eta\|^2_{L^2(Q)} + PGV^2_{\alpha,B}(u) : u \in L^2(Q)\right\}$. \hfill (T²θ-L2)

We first show that the Level 2 problem in (T²θ-L2) admits a solution for every given $u_\eta \in L^2(Q)$, every $\alpha \in \mathbb{R}^2_+$, and every $B \in \Sigma$.
Proof Let $\{u_n\} \subset BV(Q)$ be a minimizing sequence, satisfying (5.1) for every $n \in \mathbb{N}$, and let $\{v_n\} \subset BV_B(Q;\mathbb{R}^N)$ be the associated sequence of maps provided by Proposition 3.8. In view of (5.1), there exists a constant $C$ such that (5.2) holds for every $n \in \mathbb{N}$. We claim that
$$\sup\{\|v_n\|_{L^1(Q;\mathbb{R}^N)} : n \in \mathbb{N}\} < +\infty. \qquad (5.3)$$
Indeed, suppose that (5.3) fails; then, normalizing and dividing both sides of (5.2) by $\|v_n\|_{L^1(Q;\mathbb{R}^N)}$, we deduce (5.9). Since by (5.8) $D\tilde u_n \to 0$ in the sense of distributions, we deduce from (5.9) that $v = 0$. This contradicts (5.6), and implies claim (5.3).
By combining (5.2) and (5.3), we obtain a uniform bound on $\|v_n\|_{BV_B(Q;\mathbb{R}^N)}$, holding for every $n \in \mathbb{N}$ and some $C > 0$. Thus, by (5.2) and Assumption 3.2 there exist $u_B \in BV(Q)$ and $v \in BV_B(Q;\mathbb{R}^N)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \to u_B$ and $v_n \to v$ strongly in $L^1$. In view of (5.1), and by lower semicontinuity, we obtain the desired minimality of $u_B$.

Theorem 5.4 The training scheme $(T^2_\theta)$ admits at least one solution $(\hat\alpha, \hat B) \in [\theta, 1/\theta]^2 \times \Sigma$, and provides an associated optimally reconstructed image $u_{\hat\alpha,\hat B} \in BV(Q)$.
Proof Let $\{(\alpha_n, B_n)\}$ be a minimizing sequence for (T²θ-L1). By the boundedness and closedness of $\Sigma$ in $\ell^\infty$, up to a subsequence (not relabeled), there exists $(\hat\alpha, \hat B) \in [\theta, 1/\theta]^2 \times \Sigma$ such that $\alpha_n \to \hat\alpha$ in $\mathbb{R}^2$ and $B_n \to \hat B$ in $\ell^\infty$. Therefore, in view of Theorem 4.2 and the Fundamental Theorem of $\Gamma$-convergence (see, e.g., Dal Maso 1993), we have
$$u_{\alpha_n,B_n} \overset{*}{\rightharpoonup} u_{\hat\alpha,\hat B} \ \text{weakly* in } BV(Q) \text{ and strongly in } L^1(Q), \qquad (5.10)$$
where $u_{\alpha_n,B_n}$ and $u_{\hat\alpha,\hat B}$ are defined in (T²θ-L2). By (5.10), the optimality of $(\hat\alpha, \hat B)$ follows, which completes the proof.

Training Set $\Sigma[\mathcal{A}]$ Based on $(\mathcal{A}, B)$ Training Operator Pairs
This section is devoted to providing a class of operators $B$ belonging to $\Pi$ (see Definition 3.4), satisfying Assumption 3.3, and closed with respect to the convergence in (2.2). Recall that $Q = (-1/2, 1/2)^N$.

A Subcollection of $\Pi$ Characterized by $(\mathcal{A}, B)$ Training Operator Pairs
Let $U$ be an open set in $\mathbb{R}^N$, and let $\mathcal{A} : D'(U;\mathbb{R}^N) \to D'(U;\mathbb{R}^N)$ be a $d$-th order differential operator, defined as
$$\mathcal{A} := \sum_{|a| \le d} A_a \frac{\partial^{|a|}}{\partial x^a},$$
where, for every multi-index $a = (a_1, a_2, \dots, a_N) \in \mathbb{N}^N$, the derivative $\frac{\partial^{|a|}}{\partial x^a}$ is meant in the sense of distributional derivatives, and $A_a$ is a linear operator mapping from $\mathbb{R}^N$ to $\mathbb{R}^N$. Let $B$ be a first order differential operator, $B : D'(U;\mathbb{R}^N) \to D'(U;\mathbb{R}^{N\times N})$, given by (2.1), where $B^i \in M_{N^3}$ for each $i = 1,\dots,N$, and where $\frac{\partial}{\partial x_i}$ denotes the distributional derivative with respect to the $i$-th variable. We will restrict our analysis to elliptic pairs $(\mathcal{A}, B)$ satisfying the ellipticity assumptions below.

Definition 6.1 We say that $(\mathcal{A}, B)$ is a training operator pair if $B$ has finite dimensional null space $N(B)$, and $(\mathcal{A}, B)$ satisfies the following assumptions:
1. For every $\lambda \in \{-1,1\}^N$, the operator $\mathcal{A}$ has a fundamental solution $P_\lambda \in L^1(\mathbb{R}^N;\mathbb{R}^N)$ such that:
a. $\mathcal{A} P_\lambda = \lambda\delta$, where $\delta$ denotes the Dirac measure centered at the origin;
b. $P_\lambda \in C^\infty(\mathbb{R}^N\setminus\{0\};\mathbb{R}^N)$ and $\frac{\partial^{|a|}}{\partial x^a} P_\lambda \in L^1_{loc}(\mathbb{R}^N;\mathbb{R}^N)$ for every multi-index $a \in \mathbb{N}^N$ with $|a| \le d-1$ (where $d$ is the order of the operator $\mathcal{A}$).

2. There exists a constant $C_{\mathcal{A}} > 0$ such that, for every open set $U \subset \mathbb{R}^N$ with $\overline Q \subset U$, for every $u \in W^{d-1,1}(U;\mathbb{R}^N)$ and every $v \in C^\infty_c(U;\mathbb{R}^N)$, the integration-by-parts identity (6.1) holds.

Note that, in view of 1b., one has directly the following property: for every $a \in \mathbb{N}^N$ with $|a| \le d-1$, and for every open set $U \subset \mathbb{R}^N$ such that $\overline Q \subset U$, the derivative $\frac{\partial^{|a|}}{\partial x^a} P_\lambda$ belongs to $L^1(U;\mathbb{R}^N)$. Explicit examples of operators $\mathcal{A}$ and $B$ satisfying Definition 6.1 are provided in Sect. 7. Condition 2. in Definition 6.1 can be interpreted as an "integration-by-parts requirement", as highlighted by the example below. Let $N = 2$, $d = 2$, $B = \nabla$, and let $U \subset \mathbb{R}^2$ be an open set such that $\overline Q \subset U$. Consider the second order differential operator $\mathcal{A} := -\Delta$, acting componentwise. Then, for every $u \in W^{2,1}(U;\mathbb{R}^2)$ and $v \in C^\infty_c(U;\mathbb{R}^2)$ there holds
$$\int_U (\mathcal{A}v)_i\, u_i \,\mathrm{d}x = \int_U \nabla v_i \cdot \nabla u_i \,\mathrm{d}x$$
for every $i = 1, 2$. In other words, the pair $(\mathcal{A}, B)$ satisfies (6.1) with $C_{\mathcal{A}} = 1$.
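For the reader's convenience, the computation behind the example can be spelled out (recall that we are assuming, for illustration only, $\mathcal{A} := -\Delta$ acting componentwise):
$$\int_U (\mathcal{A}v)_i\, u_i \,\mathrm{d}x
= -\int_U (\Delta v_i)\, u_i \,\mathrm{d}x
= \int_U \nabla v_i \cdot \nabla u_i \,\mathrm{d}x
- \int_{\partial U} (\partial_\nu v_i)\, u_i \,\mathrm{d}\mathcal{H}^{N-1}
\qquad (i = 1,2),$$
and the boundary integral vanishes since $v \in C^\infty_c(U;\mathbb{R}^2)$; summing over $i = 1,2$ gives $\int_U \mathcal{A}v \cdot u\,\mathrm{d}x = \int_U \nabla v : \nabla u\,\mathrm{d}x$, i.e. the integration by parts moves $\mathcal{A}$ off $v$ at the sole expense of one copy of $B = \nabla$ on each factor, with $C_{\mathcal{A}} = 1$.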
Definition 6.2 For every $\mathcal{A}$ as in Definition 6.1 we denote by $\Pi_{\mathcal{A}}$ the following collection of first order differential operators $B$:
$$\Pi_{\mathcal{A}} := \{B : (\mathcal{A}, B) \text{ is a training operator pair}\}.$$
The following extension result in $BV_B$ is a corollary of the properties of the trace operator defined in Breit et al. (2017, Section 4).

Lemma 6.3 Let $B \in \Pi_{\mathcal{A}}$. Then every $v \in BV_B(Q;\mathbb{R}^N)$ admits an extension $\tilde v \in BV_B(\mathbb{R}^N;\mathbb{R}^N)$ with $\|\tilde v\|_{BV_B(\mathbb{R}^N;\mathbb{R}^N)} \le C_B\, \|v\|_{BV_B(Q;\mathbb{R}^N)}$.

Proof By Breit et al. (2017, (4.9) and Theorem 1.1) there exists a continuous trace operator $\operatorname{tr} : BV_B(Q;\mathbb{R}^N) \to L^1(\partial Q;\mathbb{R}^N)$. By the classical results of E. Gagliardo (see Gagliardo 1957) there exists a linear and continuous extension operator $E : L^1(\partial Q;\mathbb{R}^N) \to W^{1,1}(\mathbb{R}^N\setminus\overline Q;\mathbb{R}^N)$. The statement follows by setting
$$\tilde v := v\,\chi_Q + E(\operatorname{tr} v)\,\chi_{\mathbb{R}^N\setminus Q},$$
where $\chi_Q$ and $\chi_{\mathbb{R}^N\setminus Q}$ denote the characteristic functions of the sets $Q$ and $\mathbb{R}^N\setminus Q$, respectively, and by Breit et al. (2017, Corollary 4.21).

Remark 6.4
We point out that, as a direct consequence of Lemma 6.3, we obtain the extension estimate (6.4). In particular, from (6.4) and Breit et al. (2017, Corollary 4.21), the constant $C_B$ in the inequality above is controlled as follows: denoting by $C_G$ the constant associated to the classical Gagliardo extension in $W^{1,1}$ (see Gagliardo 1957), which is thus independent of $B$, and by $C^B_T$ the constant associated to the trace operator in $BV_B$, one has
$$C_B \le C\, C_G\, C^B_T \qquad (6.5)$$
for a constant $C$ independent of $B$. Hence, uniform bounds on the trace constants $C^B_T$ yield uniform bounds on the constants $C_B$. The main result of this section is the following.

Theorem 6.5 Let $\mathcal{A}$ be as in Definition 6.1. Let $\Pi$ and $\Pi_{\mathcal{A}}$ be the collections of first order operators introduced in Definitions 3.4 and 6.2, respectively. Then every operator $B \in \Pi_{\mathcal{A}}$ satisfies Assumption 3.2. Additionally, every subset of operators in $\Pi_{\mathcal{A}}$ for which the constants in (6.5) are uniformly bounded fulfills Assumption 3.3.
We proceed by first recalling two preliminary results from the literature. The next proposition, that may be found in Brezis (2011, Theorem 4.26), will be instrumental in the proof of a regularity result for distributions with bounded B-total-variation (see Proposition 6.9).

Proposition 6.6 Let $F$ be a bounded set in $L^p(\mathbb{R}^N)$, $1 \le p < +\infty$, and assume that
$$\lim_{|h|\to 0}\, \sup_{f \in F}\, \|\tau_h f - f\|_{L^p(\mathbb{R}^N)} = 0,$$
where $\tau_h f := f(\cdot + h)$ denotes the translation operator (6.3). Then, denoting by $F_Q$ the collection of the restrictions to $Q$ of the functions in $F$, the closure of $F_Q$ in $L^p(Q)$ is compact.
We also recall, in Proposition 6.7, some basic properties of the space $BV_B(Q;\mathbb{R}^N)$, in particular the approximation of $BV_B$-maps by smooth maps. Before establishing Theorem 6.5, we prove a technical lemma.
Lemma 6.8 Let $k \in \mathbb{N}$. Then there exists a constant $C > 0$ such that, for every $h \in \mathbb{R}^N$ and $w \in W^{k,1}(\mathbb{R}^N;\mathbb{R}^N)$,
$$\|\tau_h w - w\|_{W^{k-1,1}(\mathbb{R}^N;\mathbb{R}^N)} \le C\,|h|\,\|w\|_{W^{k,1}(\mathbb{R}^N;\mathbb{R}^N)},$$
where $\tau_h$ is the operator defined in (6.3).
Proof By the linearity of $\tau_h$, we may treat each derivative $\frac{\partial^{|a|}}{\partial x^a} w$, $|a| \le k-1$, separately. On the one hand, by the Sobolev embedding theorem (see, e.g., Leoni 2009), we have (6.6). On the other hand, by the continuity of the translation operator in $L^1$ (see, e.g., Brezis (2011, Lemma 4.3) for a proof in $\mathbb{R}^N$; the analogous argument holds on bounded open sets) we have the limsup estimate (6.7). The result follows by combining (6.6) and (6.7).
The next proposition shows that operators in $\Pi_{\mathcal{A}}$ satisfy Assumption 3.2.
Proposition 6.9 Let $B \in \Pi_{\mathcal{A}}$, and let $BV_B(Q;\mathbb{R}^N)$ be the space introduced in Definition 3.1. Then the injection of $BV_B(Q;\mathbb{R}^N)$ into $L^1(Q;\mathbb{R}^N)$ is compact.

Proof For every $u \in BV_B(Q;\mathbb{R}^N)$ we still denote by $u$ its extension to $BV_B(2Q;\mathbb{R}^N)$ provided by Lemma 6.3. In view of Proposition 6.7, for every $u \in BV_B(Q;\mathbb{R}^N)$ we then find a sequence of maps $\{v^u_n\}_{n=1}^\infty \subset C^\infty(2Q;\mathbb{R}^N)$ approximating $u$ in the sense of (6.8). With a slight abuse of notation, we still denote by $v^u_n$ the $C^d$-extension of the above maps to the whole $\mathbb{R}^N$ (see, e.g., Fefferman 2007), where $d$ is the order of the operator $\mathcal{A}$. Without loss of generality, up to a multiplication by a cut-off function, we can assume that $v^u_n \in C^d_c(3Q;\mathbb{R}^N)$ for every $n \in \mathbb{N}$. We first show that, for $F$ a bounded subset of $BV_B(Q;\mathbb{R}^N)$, the uniform translation estimate (6.10) holds, where we recall $\tau_h$ from (6.3), and where, for fixed $u \in F$, $v^u_n$ is as above and satisfies (6.8).
Let $h \in \mathbb{R}^N$ and let $\delta_h$ be the Dirac distribution centered at $h \in \mathbb{R}^N$. By the properties of the fundamental solution $P_\lambda$ we deduce a convolution representation of $\tau_h v^u_n - v^u_n$ in terms of $\tau_h P_\lambda - P_\lambda$, valid for every $i = 1,\dots,N$ and every $\lambda \in \{-1,1\}^N$. Therefore, we obtain the corresponding translation estimate for every $\lambda \in \{-1,1\}^N$, where in the last inequality we used the fact that $\tau_h P_\lambda - P_\lambda \in W^{d-1,1}(\mathbb{R}^N;\mathbb{R}^N)$ owing to Definition 6.1, the identity $\tau_h \frac{\partial^{|a|}}{\partial x^a} P_\lambda = \frac{\partial^{|a|}}{\partial x^a}(\tau_h P_\lambda)$, as well as Definition 6.1, Assertion 2.
We close this subsection by proving a compactness and lower-semicontinuity result for functions with uniformly bounded $BV_{B_n}$-norms. We recall that the definition of $M_{\mathcal{A}}$ is found in (6.2).

Proposition 6.10 Let $\{B_n\}_{n=1}^\infty \subset \Pi_{\mathcal{A}}$ be such that $B_n \to B$ in $\ell^\infty$ and the constants $C_{B_n}$ in (6.5) are uniformly bounded. For every $n \in \mathbb{N}$ let $v_n \in BV_{B_n}(Q;\mathbb{R}^N)$ be such that
$$\sup\{\|v_n\|_{BV_{B_n}(Q;\mathbb{R}^N)} : n \in \mathbb{N}\} < +\infty. \qquad (6.11)$$
Then there exists $v \in BV_B(Q;\mathbb{R}^N)$ such that, up to the extraction of a subsequence (not relabeled),
$$v_n \to v \ \text{strongly in } L^1(Q;\mathbb{R}^N) \qquad (6.12)$$
and
$$|Bv|_{M_b(Q;\mathbb{R}^{N\times N})} \le \liminf_{n\to\infty} |B_n v_n|_{M_b(Q;\mathbb{R}^{N\times N})}. \qquad (6.13)$$

Proof Let $v_n$ satisfy (6.11). With a slight abuse of notation we still indicate by $v_n$ the $BV_{B_n}$ continuous extension of the above maps to $\mathbb{R}^N$ (see Lemma 6.3). Let $\phi \in C^\infty_c(2Q)$ be a cut-off function such that $\phi \equiv 1$ on $Q$, and for every $n \in \mathbb{N}$ let $\tilde v_n$ be the map $\tilde v_n := \phi v_n$. Note that $\operatorname{supp}\tilde v_n \subset\subset 2Q$. Additionally, by Lemma 6.3 there holds the uniform bound (6.14), where in the last inequality we used Lemma 6.3, and where the constants $C_1$ and $C_2$ depend only on the cut-off function $\phi$. To prove (6.12) we first show the equicontinuity property (6.15), where we recall $\tau_h$ from (6.3). Arguing as in the proof of (6.10), by (6.14) we deduce that for $|h|$ small enough the corresponding estimate holds, since $\operatorname{supp}\tilde v_n \subset\subset 2Q$, for every $n \in \mathbb{N}$. Property (6.15) follows by (6.2). Owing to Proposition 6.6, we deduce (6.12). We now prove (6.13). Let $\varphi \in C^\infty_c(Q;\mathbb{R}^{N\times N})$ be such that $|\varphi| \le 1$. Testing against $\varphi$ and passing to the limit, where in the last step we use the fact that $v_n \to v$ strongly in $L^1(Q)$ and $B_n \to B$ in $\ell^\infty$, and then taking the supremum over $\varphi$, completes the proof of (6.13) and of the proposition.
Proof of Theorem 6.5 Let $B \in \Pi_{\mathcal{A}}$ be given. The fact that $B$ satisfies Assumption 3.2 follows by Propositions 6.7 and 6.9. The fulfillment of Assumption 3.3 is a direct consequence of Proposition 6.10.

Training Scheme with Fixed and Multiple Operators $\mathcal{A}$
In this subsection we provide a construction of training sets associated to a given differential operator $\mathcal{A}$, namely collections of differential operators $B$ for which our training scheme is well-posed (see Definitions 5.1 and 5.2). We first introduce a collection $\widehat{\Sigma}[\mathcal{A}]$ for a given operator $\mathcal{A}$ of order $d \in \mathbb{N}$.
Definition 6.11 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$. We denote by $\widehat{\Sigma}[\mathcal{A}]$ the set
$$\widehat{\Sigma}[\mathcal{A}] := \left\{B \in \Pi_{\mathcal{A}} : \|B\|_{\ell^\infty} \le 1\right\}.$$
The first result of this subsection is the following.
Theorem 6.12 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$, and assume that $\Sigma[\mathcal{A}]$ is a non-empty subset of $\widehat{\Sigma}[\mathcal{A}]$ which is closed in the $\ell^\infty$ convergence with respect to the property of having finite-dimensional null space. Then the collection $\Sigma[\mathcal{A}]$ is a training set (see Definition 5.1).

Proof By the definition of $\Sigma[\mathcal{A}]$ we just need to show that $\Sigma[\mathcal{A}]$ is closed in $\ell^\infty$. Let $u \in C^\infty(Q;\mathbb{R}^N)$ and $\{B_n\}_{n=1}^\infty \subset \Sigma[\mathcal{A}]$ be given. Then, up to a subsequence (not relabeled), we may assume that $B_n \to B$ in $\ell^\infty$. We claim that $B \in \Pi_{\mathcal{A}}$.

The fact that $N(B)$ is finite-dimensional follows by definition. To conclude the proof of the theorem we still need to show that $(\mathcal{A}, B)$ satisfies Definition 6.1, Assertion 2. To this end, let $\{v_k\}$ be as in (6.16). Integrating by parts we obtain the analogous identities for the operators $B_n$, for every $i = 1,\dots,N$. Taking the limit as $n \to \infty$ first, and then as $k \to \infty$, since $B_n \to B$ in $\ell^\infty$ and in view of (6.16), we conclude that (6.1) holds for the pair $(\mathcal{A}, B)$. The proof of the second part of Assertion 2 is analogous. This shows that $(\mathcal{A}, B)$ satisfies Definition 6.1 and concludes the proof of the theorem.

Remark 6.13
We note that the result of Theorem 6.12 still holds if we replace the upper bound 1 in Definition 6.11 with an arbitrary positive constant.
We additionally point out that requiring that the finite-dimensional-kernel property is preserved in the limit passage automatically ensures the existence of a lower bound on the $\ell^\infty$-norms of the operators. In other words, the null operator is not included in our analysis.
As a final remark, we stress that, if $\widehat{\Sigma}[\mathcal{A}]$ contains an operator $\bar B$ with finite-dimensional null space, then a training set $\Sigma[\mathcal{A}] \subset \widehat{\Sigma}[\mathcal{A}]$, closed in the $\ell^\infty$-norm with respect to the property of having finite-dimensional null space, can be constructed by taking the intersection of $\widehat{\Sigma}[\mathcal{A}]$ with a small enough neighborhood of $\bar B$ in the $\ell^\infty$-topology. In fact, denoting by $\bar B^i \in M_{N^3}$, $i = 1,\dots,N$, the coefficients of $\bar B$, the symbol of $\bar B$ is defined as
$$\bar B[\xi] := \sum_{i=1}^N \bar B^i\, \xi_i, \qquad \xi \in \mathbb{C}^N.$$
The condition of having finite-dimensional null space is equivalent to the so-called $\mathbb{C}$-ellipticity condition, which consists in the injectivity of the map $\bar B[\xi]$ as a linear map on $\mathbb{C}^N\setminus\{0\}$ for every $\xi \in \mathbb{C}^N\setminus\{0\}$ [see Breit et al. (2017, Section 2.3)]. By linearity, this, in turn, can be reduced to the condition of the map $\bar B[\xi]$ being injective on $\mathbb{C}^N\setminus\{0\}$ for every $\xi$ in $B_{\mathbb{C}}(0,1)\setminus\{0\}$, where $B_{\mathbb{C}}(0,1)$ is the unit ball centered at the origin of $\mathbb{C}^N$. In particular, it is a stable condition with respect to small $\ell^\infty$-perturbations of the coefficients.
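As a quick numerical illustration (a sanity check we add here, not a proof, and not part of the paper's code), one can sample the symbol of the symmetrized gradient in $N = 2$ and verify that its smallest singular value stays bounded away from zero on the unit sphere of $\mathbb{C}^2$, consistently with its known $\mathbb{C}$-ellipticity:

```python
import numpy as np

def symbol_sym_grad(xi):
    """Matrix of v -> (xi v^T + v xi^T)/2, viewed as a linear map C^2 -> C^4."""
    cols = []
    for k in range(2):
        e = np.zeros(2, dtype=complex)
        e[k] = 1.0
        M = 0.5 * (np.outer(xi, e) + np.outer(e, xi))
        cols.append(M.ravel())
    return np.stack(cols, axis=1)   # shape (4, 2)

rng = np.random.default_rng(1)
min_sv = np.inf
for _ in range(1000):
    xi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    xi /= np.linalg.norm(xi)        # sample the unit sphere of C^2
    s = np.linalg.svd(symbol_sym_grad(xi), compute_uv=False)
    min_sv = min(min_sv, s[-1])

# C-ellipticity of sym-grad: B[xi] is injective for every xi != 0, so the
# smallest singular value over the (compact) sphere is strictly positive.
print("smallest sampled singular value:", min_sv)
```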
We now consider the case of multiple operators $\mathcal{A}$.

Definition 6.14 We say that a collection $\mathbb{A}$ of differential operators $\mathcal{A}$ is a training set builder if
$$\sup\{C_{\mathcal{A}} : \mathcal{A} \in \mathbb{A}\} < +\infty \quad\text{and}\quad \lim_{|h|\to 0}\, \sup_{\mathcal{A}\in\mathbb{A}} M_{\mathcal{A}}(h) = 0, \qquad (6.17)$$
where $C_{\mathcal{A}}$ and $M_{\mathcal{A}}(h)$ are defined in (6.1) and (6.2), respectively.

We then define the class $\Sigma[\mathbb{A}]$ via
$$\Sigma[\mathbb{A}] := \bigcup_{\mathcal{A}\in\mathbb{A}} \Sigma[\mathcal{A}],$$
where for every $\mathcal{A} \in \mathbb{A}$, $\Sigma[\mathcal{A}]$ is a class as in Definition 6.11 and Theorem 6.12.
We close this section by proving the following theorem.
Theorem 6.15 Let $\mathbb{A}$ be a training set builder. Then $\Sigma[\mathbb{A}]$ is a training set.
Proof The proof of this theorem follows the argument in the proof of Theorem 6.12, using the fact that the two critical constants $M_{\mathcal{A}}(h)$ and $C_{\mathcal{A}}$, defined in (6.2) and (6.1), respectively, are uniformly bounded due to (6.17).

Explicit Examples and Numerical Observations
In this section we exhibit several explicit examples of operators A and training sets [A ], we provide numerical simulations and we make some observations derived from them.

The Existence of Fundamental Solutions of Operators A
One important requirement in Definition 6.1 is the existence of the fundamental solution $P_\lambda \in L^1(\mathbb{R}^N;\mathbb{R}^N)$ of a given operator $\mathcal{A}$. A result in this direction can be found in Hsiao and Wendland (2008, p. 351, Section 6.3), where an explicit form of the fundamental solution for Agmon–Douglis–Nirenberg elliptic systems with constant coefficients is provided.
Remark 7.1 In the case in which $N = 2$, $\mathcal{A}$ has order 2 and satisfies the assumptions in Hsiao and Wendland (2008, p. 351, Section 6.3), the fundamental solution $P_\lambda$ can be written in terms of the fundamental solution $L$ of Laplace's equation and of a constant $R_{\mathcal{A}}$ depending on $\mathcal{A}$, via an integration taken over the unit circle $|\eta| = 1$ with arc-length element $\mathrm{d}\omega_\eta$ (see (7.1)).
In the special case in which $\mathcal{A}$ is the componentwise Laplace operator, the fundamental solution $P_\alpha$, with $\mathcal{A}P_\alpha = \alpha\delta$ for $\alpha \in \mathbb{R}^2$, is given by $P_\alpha = \alpha L$. We observe that $\nabla P_\alpha$ is positively homogeneous of degree $-1$ $(= 1 - N)$. Also, since $R_{\mathcal{A}}$ in (7.1) is a constant, $\nabla P_\lambda$ must have the same homogeneity as $\nabla P_\alpha$, which is $1 - N$.
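For instance, in dimension $N = 2$ the fundamental solution of the Laplacian and its gradient are
$$L(x) = \frac{1}{2\pi}\,\log|x|, \qquad \nabla L(x) = \frac{x}{2\pi|x|^2}, \qquad \nabla L(\lambda x) = \frac{\lambda x}{2\pi\lambda^2|x|^2} = \lambda^{-1}\,\nabla L(x) \quad (\lambda > 0),$$
so that $\nabla L$ is indeed positively homogeneous of degree $-1 = 1 - N$.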
Proposition 7.2 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$, and assume that its fundamental solution $P_\lambda$ is such that $\frac{\partial^{|a|}}{\partial x^a} P_\lambda$ is positively homogeneous of degree $1 - N$ for all multi-indices $a \in \mathbb{N}^N$ with $|a| = d - 1$. Then property (6.2) is satisfied.
Proof Let $s \in (0,1)$ be fixed. Since $\frac{\partial^{|a|}}{\partial x^a} P_\lambda$ is positively homogeneous of degree $1 - N$ for all multi-indices $a \in \mathbb{N}^N$ with $|a| = d - 1$, by Temam (1983, Lemma 1.4) we deduce the estimate required in (6.2), valid for every $x \in \mathbb{R}^N$, $0 \le s \le 1$, and $|h| \le 1/2$, where the constant $C$ is independent of $x$ and $h$.

Remark 7.3
As a corollary of Proposition 7.2 and Remark 7.1, we deduce that all operators $\mathcal{A}$ satisfying the assumptions in Hsiao and Wendland (2008, p. 351, Section 6.3) comply with Definition 6.1, Assertion 1. In particular, differential operators $\mathcal{A}$ which can be written in the form $\mathcal{A} = B^* \circ \mathcal{C}$, where $B^*$ is the first order differential operator associated to $B$ and having as coefficients the transposes of the matrices $B^i$, $i = 1,\dots,N$, and where $\mathcal{C}$ is a differential operator of order $d - 1$ having constant coefficients, are such that $(\mathcal{A}, B)$ complies with Definition 6.1.

The Unified Approach to $TGV^2$ and $NsTGV^2$: An Example of $\Sigma[\mathcal{A}]$
In this section we give an explicit construction of an operator $\mathcal{A}$ such that the seminorms $NsTGV^2$ and $TGV^2$, as well as a continuum of topologically equivalent seminorms connecting them, can be obtained from operators $B \in \Sigma[\mathcal{A}]$.
We start by recalling the definition of the classical symmetrized gradient $Ev := (\nabla v + \nabla^T v)/2$, and we let $B_{sym}(v)$ be defined as in (2.1), with coefficients $B^1_{sym}$ and $B^2_{sym}$ chosen so that $B_{sym}(v) = Ev$ for all $v \in C^\infty(Q;\mathbb{R}^2)$; in particular, $N(B_{sym})$ is finite dimensional (its null space consists of the infinitesimal rigid motions). The first part of Definition 6.1 follows from Remark 7.3. Next we verify that (6.1) holds. Indeed, choosing $\mathcal{A}$ as in (7.2), we first observe that the integration-by-parts identity (7.7) holds for every $w \in W^{1,2}(Q;\mathbb{R}^2)$ and $v \in C^\infty_c(Q;\mathbb{R}^2)$; that is, (6.1) holds for every open set $U \subset \mathbb{R}^N$ such that $\overline Q \subset U$. The same computation holds for $w \in C^\infty_c(Q;\mathbb{R}^2)$ and $v \in BV_B(Q;\mathbb{R}^2)$. This proves that Assertion 2 in Definition 6.1 is also satisfied.
We finally construct an example of a training set $\Sigma[\mathcal{A}]$. For every $0 \le s, t \le 1$, we define $B_{s,t}$ as in (2.1), with coefficients chosen so that, for $v \in C^\infty(Q;\mathbb{R}^2)$,
$$B_{s,t}(v) := \begin{pmatrix} \partial_1 v_1 & s\,\partial_2 v_1 + t\,\partial_1 v_2 \\ (1-s)\,\partial_2 v_1 + (1-t)\,\partial_1 v_2 & \partial_2 v_2 \end{pmatrix}. \qquad (7.8)$$
By a straightforward computation, we obtain that $N(B_{s,t})$ is finite dimensional for every $0 \le s, t \le 1$. Additionally, Assertion 1 in Definition 6.1 follows by adapting the arguments in Remark 7.3. Finally, arguing exactly as in (7.7), we obtain the analogous integration-by-parts identity, and hence we deduce again Statement 2 in Definition 6.1. Therefore, the collection $\Sigma[\mathcal{A}]$ given by
$$\Sigma[\mathcal{A}] := \{B_{s,t} : 0 \le s, t \le 1\}$$
is a training set according to Definition 6.11. We remark that $\Sigma[\mathcal{A}]$ includes the operator $TGV^2$ (with $s = t = 1/2$) and the operator $NsTGV^2$ (with $t = 0$ and $s = 1$), as well as a collection of all "interpolating" regularizers. In other words, our training scheme $(T^2_\theta)$ with training set $\Sigma[\mathcal{A}]$ is able to search for optimal results in a class of operators including the commonly used $TGV^2$ and $NsTGV^2$, as well as any interpolation regularizer.
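A minimal numerical sketch of $B_{s,t}$ follows (ours; note that the concrete matrix form in (7.8) above is a reconstruction, pinned down by the two stated endpoints $TGV^2$ at $s = t = 1/2$ and $NsTGV^2$ at $(s,t) = (1,0)$):

```python
import numpy as np

def B_st(v, s, t, h=1.0):
    """Apply the interpolated operator B_{s,t} to a planar field v = (v1, v2).

    Endpoints: s = t = 1/2 gives the symmetrized gradient Ev, and
    (s, t) = (1, 0) gives the full gradient (the NsTGV^2 case).
    """
    d1v1, d2v1 = np.gradient(v[0], h)   # derivatives of v1 along x1, x2
    d1v2, d2v2 = np.gradient(v[1], h)
    return np.array([
        [d1v1,                            s * t + 0 * d1v1 if False else s * d2v1 + t * d1v2],
        [(1 - s) * d2v1 + (1 - t) * d1v2, d2v2],
    ])

rng = np.random.default_rng(2)
v = rng.standard_normal((2, 32, 32))

E = B_st(v, 0.5, 0.5)                   # symmetrized gradient
assert np.allclose(E[0, 1], E[1, 0])    # off-diagonal entries coincide

G = B_st(v, 1.0, 0.0)                   # full gradient: (G)_{jk} = d_k v_j
d2v1, d1v2 = np.gradient(v[0], 1.0)[1], np.gradient(v[1], 1.0)[0]
assert np.allclose(G[0, 1], d2v1) and np.allclose(G[1, 0], d1v2)
```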

Comparison with Other Works
In Brinkmann et al. (2019) the authors analyze a range of first order linear operators generated by diagonal matrices. To be precise, letting $B = \operatorname{diag}(\beta_1, \beta_2, \beta_3, \beta_4)$, Brinkmann et al. (2019) treats first order operators $B$ defined as
$$B(v) := B\,(\partial_1 v_1,\, \partial_2 v_1,\, \partial_1 v_2,\, \partial_2 v_2)^T.$$
That is, instead of viewing $\nabla v$ as a $2 \times 2$ matrix as we do, in Brinkmann et al. (2019) $\nabla v$ is represented as a vector in $\mathbb{R}^4$. In this way, the symmetric gradient $Ev$ in (7.6) admits a representation of this form. However, the representation above does not allow one to consider the skewed symmetric gradients $B_{s,t}(v)$ with the structure introduced in (7.8). Indeed, let $s = t = 0.2$. We have
$$B_{0.2,0.2}(v) = \begin{pmatrix} \partial_1 v_1 & 0.2\,\partial_2 v_1 + 0.2\,\partial_1 v_2 \\ 0.8\,\partial_2 v_1 + 0.8\,\partial_1 v_2 & \partial_2 v_2 \end{pmatrix}.$$
Rewriting the matrix above as a vector in $\mathbb{R}^4$, we obtain
$$\big(\partial_1 v_1,\ 0.2\,\partial_2 v_1 + 0.2\,\partial_1 v_2,\ 0.8\,\partial_2 v_1 + 0.8\,\partial_1 v_2,\ \partial_2 v_2\big).$$
That is, we would need a matrix $B$ mixing the distinct components $\partial_2 v_1$ and $\partial_1 v_2$ of $\nabla v$, which is impossible for a diagonal $B$.

Numerical Simulations and Observations
Let $\mathcal{A}$ be the operator defined in Sect. 7.2, and let
$$\Sigma[\mathcal{A}] := \{B_{s,t} : 0 \le s, t \le 1\},$$
where, for $0 \le s, t \le 1$, $B_{s,t}$ are the first order operators introduced in (7.8). As we remarked before, the seminorm $PGV^2_{B_{s,t}}$ interpolates between the $TGV^2$ and $NsTGV^2$ regularizers. We define the cost function $C(\alpha, s, t)$ to be
$$C(\alpha, s, t) := \|u_{\alpha, B_{s,t}} - u_c\|_{L^2(Q)}. \qquad (7.9)$$
To explore the numerical landscape of the cost function $C(\alpha, s, t)$, we consider the discrete box-constraint in (7.10). We perform numerical simulations on the images shown in Fig. 1: the first image represents a clean image $u_c$, whereas the second one is a noised version $u_\eta$, with heavy artificial Gaussian noise. The reconstructed image $u_{\alpha,B}$ in Level 2 of our training scheme is computed by using the primal-dual algorithm presented in Chambolle and Pock (2011).
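A skeleton of the resulting Level-1 search is sketched below for orientation. The grid values are hypothetical stand-ins for (7.10), and the Level-2 solver is replaced by a trivial smoother so that the skeleton stays runnable; the paper instead uses the primal-dual algorithm of Chambolle and Pock (2011).

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter   # stand-in Level-2 solver only

def denoise_pgv(u_eta, alpha, s, t):
    """Placeholder for Level 2 of (T^2_theta).

    The actual solver is a Chambolle-Pock iteration for
    min_u ||u - u_eta||^2 + PGV^2_{alpha, B_{s,t}}(u); a Gaussian blur is
    substituted here only so that the Level-1 skeleton below runs.
    """
    return gaussian_filter(u_eta, sigma=0.5 * alpha[0])

u_c = np.zeros((64, 64)); u_c[24:40, 24:40] = 1.0              # clean image
u_eta = u_c + 0.1 * np.random.default_rng(3).standard_normal(u_c.shape)

# hypothetical discrete box constraint standing in for (7.10)
alpha0s, alpha1s = np.linspace(0.4, 6.0, 8), np.linspace(0.4, 2.0, 5)
ss = ts = np.linspace(0.0, 1.0, 6)

cost, best = min(
    (np.linalg.norm(denoise_pgv(u_eta, (a0, a1), s, t) - u_c), (a0, a1, s, t))
    for a0, a1, s, t in itertools.product(alpha0s, alpha1s, ss, ts))
print("optimal (alpha0, alpha1, s, t):", best, " cost:", cost)
```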

Fig. 2 From left to right: mesh and contour plots of the cost function $C(\bar\alpha, s, t)$, in which $\bar\alpha = (\bar\alpha_0, \bar\alpha_1)$ is fixed and $(s,t) \in [0,1]^2$ (see also Table 1 below). That is, the reconstructed image $u_{\hat\alpha, B_{\hat s,\hat t}}$ is indeed "better" in the sense of our training scheme ($L^2$-difference).
We again remark that the introduction of the $PGV^2_{\alpha,B}$ regularizers into the training scheme is only meant to expand the training choices, not to provide a seminorm superior to the popular choices $TGV^2$ or $NsTGV^2$. Whether the optimal regularizer is $TGV^2$, $NsTGV^2$, or an intermediate regularizer depends entirely on the given training image $u_\eta = u_c + \eta$.
We conclude this section with a further study of the numerical landscape associated to the cost function $C(\alpha, s, t)$. In this second example we consider again the discrete box-constraint in (7.10), and we analyze the images shown in Fig. 3: the first image represents the clean image $u_c$, whereas the second one is a noised version $u_\eta$. The reconstructed image $u_{\alpha,B}$ in Level 2 of our training scheme is again computed by using the primal-dual algorithm presented in Chambolle and Pock (2011).
We report that the minimum value of (7.9), subject to (7.10), is achieved at $\hat\alpha_0 = 5.6$, $\hat\alpha_1 = 1.2$, $\hat s = 0.8$, and $\hat t = 0.2$. The optimal reconstruction $u_{\hat\alpha, B_{\hat s,\hat t}}$ is the last image in Fig. 3, whereas the optimal result with $B_{s,t} \equiv E$, i.e., $u_{\hat\alpha, TGV}$, is the third image in Fig. 3. Although the optimal reconstructed images $u_{\hat\alpha, B_{\hat s,\hat t}}$ and $u_{\hat\alpha, TGV}$ do not present many differences to the naked eye, we do observe, also in this second example, a strictly smaller value of the cost function. Namely, the reconstructed image $u_{\hat\alpha, B_{\hat s,\hat t}}$ is indeed "better" in the sense of our training scheme ($L^2$-difference).
To visualize the change of the cost function produced by different values of $(s,t) \in [0,1]^2$, we fix $\bar\alpha_0 = 5.6$ and $\bar\alpha_1 = 1.9$ and plot in Fig. 4 the mesh and contour plots of $C(\bar\alpha, s, t)$.

Conclusions
We have introduced a novel class of regularizers providing a generalization of $TGV^2$ to the case in which the higher-order operator can differ from the symmetrized gradient. After establishing the basic properties of this class of functionals, we have studied the well-posedness of a bilevel learning scheme selecting the optimal regularizer in our class in terms of a quadratic cost function. Finally, we have shown some very first numerical simulations of our scheme. We point out that in both examples (Figs. 1 and 3) the optimal reconstructions are not clearly distinguishable to the naked eye from their $TGV^2$ counterparts, although they perform noticeably better in terms of the cost-function landscapes. We conjecture that this behavior is not generic. Further numerical investigations are beyond the scope of this paper and will be the subject of forthcoming works.