A Unified Surface Geometric Framework for Feature-Aware Denoising, Hole Filling and Context-Aware Completion

Technologies for 3D data acquisition and 3D printing have enormously developed in the past few years, and, consequently, the demand for 3D virtual twins of the original scanned objects has increased. In this context, feature-aware denoising, hole filling and context-aware completion are three essential (but far from trivial) tasks. In this work, they are integrated within a geometric framework and realized through a unified variational model aiming at recovering triangulated surfaces from scanned, damaged and possibly incomplete noisy observations. The underlying non-convex optimization problem incorporates two regularisation terms: a discrete approximation of the Willmore energy forcing local sphericity and suited for the recovery of rounded features, and an approximation of the ℓ0 pseudo-norm penalty favouring sparsity in the normal variation. The proposed numerical method solving the model is parameterization-free, avoids expensive implicit volume-based computations and is based on the efficient use of the Alternating Direction Method of Multipliers. Experiments show how the proposed framework can provide a robust and elegant solution suited for accurate restorations even in the presence of severe random noise and large damaged areas.


Introduction
In spite of the remarkable progress achieved in the fields of 3D scanning and 3D printing technologies, the available digitizing techniques often produce defective data samples corrupted by random noise and subject to a local lack of data, typically due to occlusions, surface reflection, scanner placement constraints, etc. In the context of digital restoration of cultural heritage artworks, for instance, the scanned object itself (e.g. an archaeological finding) may be incomplete and damaged, since some of its parts have been ruined over time by wear and tear. In these cases, to facilitate the downstream processing of its digital content, the object shape needs to be denoised and repaired. Generally speaking, desirable properties of a surface repair toolkit include:
• Feature-aware denoising: the removal of undesirable noise or spurious information from the data, while preserving original features, including edges, creases and corners, with special care for robustness on defective and incomplete point sets.
• Smooth hole filling/inpainting: the process of recovering a missing or damaged region of the surface by filling it in a plausible way using the available information. The result of a surface-inpainting operation depends on the specific application considered. In digital cultural heritage restoration, for instance, surface inpainting is understood as the recovery of holes in the data or the removal of scratches/cracks possibly present in the scanned objects. In prototype manufacturing, the goal shifts towards a watertight virtual reconstruction, so that the operation is rather interpreted as a smooth hole filling. In either case, all damaged areas should be filled in a seamless way that is minimally distinguishable from their surrounding regions.
• Context-aware completion: when a priori knowledge of the missing/damaged parts of the scanned model is available, it is desirable that the completion of the damaged areas occurs by pasting known data, such as template patches, automatically or semi-automatically under user guidance. This allows, for example, repairing a damaged part of an artefact by filling the region of interest with a patch taken from a valid/undamaged region of the model itself, or even from other 3D geometric models.
An example of the three geometric tasks processed by the proposed geometric variational framework is illustrated in Fig. 1. The original noisy and incomplete scanned angel mesh (see Fig. 1, left) is denoised while keeping all the holes, see Fig. 1 (center, first row). Then, the inpainting tool fills the holes smoothly, as shown in Fig. 1 (center, second row), driven by the inpainting mask illustrated on the left. The large damaged region on the head is recovered by replacing it with a hair curl patch selected from a different, undamaged mesh; see the recovered mesh in Fig. 1 (center, third row). Finally, the completion of the damaged part together with hole filling is performed and illustrated in Fig. 1 (right).
We propose a unified approach for these challenging geometric tasks by defining a variational problem encoding a priori knowledge of the particular problem (i.e. the mask operators) directly in the cost functional.
Here, it is assumed that a corrupted surface S embedded in R^3, possibly characterized by the presence of a damaged (incomplete) region S_D ⊂ S, is represented by a triangulated mesh M_0 = (V_0, T_0), with V_0 ∈ R^{n_V × 3} being the set of n_V vertices and T_0 ∈ R^{n_T × 3} the set of n_T triangle faces, sharing n_E edges.
The aforementioned three geometry processing tasks are addressed by means of a unified variational formulation of the form

V* ∈ arg min_V { J(V; M_E) := R_1(V; M_E) + R_2(V; M_E^c) + λ F(V; V_0) },   (1)

where χ_{S\S_D} : S → {0, 1} denotes the characteristic function of the subset S\S_D, while the binary mask operators M_E ∈ {0, 1}^{n_E} and M_E^c = 1_{n_E} − M_E characterize the specific surface geometry considered. As a result of the discretization on the triangulated mesh, the role of the characteristic function is played by a mask operator M_V ∈ {0, 1}^{n_V} whose zero values identify the region S_D.
The set of vertices V * solution of the unconstrained optimization problem (1) defines a restored triangulated surface M * = (V * , T * ) which provides a solution of the three surface geometry tasks, depending on the particular setup considered.
In case of surface completion, the region identified by S_D is replaced by a given template patch P with boundary b_P. In this case, the only compatibility assumption required is that the boundary of S_D in M_0, named b_0, and b_P have the same number of vertices. If this is not the case, a suitable subdivision process can be preliminarily applied. The template patch P can be identified on the object itself as well as on other objects, thus allowing a mesh editing process.
The proposed approach does not need any global or even local 2D parameterization, nor any sophisticated octree data structures to efficiently solve implicit volumetric computations [1,2]. The data are explicitly treated as connected samples of a surface embedded in R^3.

Fig. 1 Applications of the proposed surface geometry framework: incomplete and noisy surface input M_0 (left); denoised surface M* (center, first row); inpainting mask M_V and inpainting result M* (center, second row); context-aware completion of the curly hair detail without inpainting (center, third row); context-aware completion result M* with preliminary inpainting (right). The S_D region is represented in blue in the masks.
The functional J(V; M_E) in (1) is characterized by the presence of the sum of two regularization terms: the sparsity-promoting term R_1(V; M_E) and the sphericity-inducing penalty R_2(V; M_E^c). Furthermore, a fidelity term F(V; V_0), weighted by the scalar parameter λ ≥ 0, is used to control the trade-off between fidelity to the observations and regularity in the solution V* of (1).
The regularizer R_1 favours solutions with a piecewise constant normal map and sharp discontinuities. Such a regularizer can be designed to penalize a measure of the "roughness" or bumpiness (curvature) of a mesh or, equivalently, to promote sparsity of this measure. A natural bumpiness measure for a surface is the normal deviation, which measures the normal variation between adjacent triangles.
The ideal sparse-recovery term one would like to consider is the non-convex, discontinuous ℓ0 pseudo-norm, but its combinatorial nature makes the minimization of (1) an NP-hard problem. We therefore consider as regularizer R_1(V; M_E) a sparsity-promoting parametrized non-convex term, whose form provides effective control on the sparsity of the normal deviation magnitudes, being more accurate than the ℓ1 norm while mitigating the strong bias and the numerical difficulties of the ℓ0 pseudo-norm. Numerical experiments will show its efficiency in handling high levels of noise, producing well-shaped triangles and faithfully recovering straight and smoothly curved edges.
As far as the R_2 regularization term is concerned, we choose it so as to encode a geometric energy aimed at forcing local sphericity in correspondence with rounded regions. We consider here the Willmore energy, which is to be preferred over standard approaches based on mean curvature flow due to its scale-invariant nature. Such energy is a quantitative measure of how much a given surface deviates from a round sphere, and it is defined by the curvature functional (2), where dA is the area element, and h and k denote the mean and Gaussian curvatures, respectively. The Willmore energy E_w(S) is non-negative and vanishes if and only if S is a sphere [3]. For compact and closed surfaces, and for surfaces whose boundary is fixed up to first order, i.e. positions and normals are prescribed, finding the minima of (2) is equivalent to minimizing the Willmore bending energy E_h(S) = (1/2) ∫_S h^2 dA, since the two functionals differ only by a constant (a multiple of the Euler characteristic of the surface S, by the Gauss–Bonnet theorem) [4]. In this paper, we present a discrete Willmore energy which, in contrast to traditional approaches, follows an edge-based discrete formulation.
Compared to the earlier version of this work [5] which focused only on surface denoising, this work further integrates the Willmore energy term in (1) to promote fairness and extends the model usability to more tasks.
From an algorithmic point of view, we solve the (non-convex) problem (1) by means of an Alternating Direction Method of Multipliers (ADMM) scheme. This allows us to split the minimization problem into three more tractable sub-problems. Closed-form solutions can be found for two of these problems, while for the third, non-convex one, different optimization solvers can be used. For this sub-step, we compare standard gradient descent with heavy-ball and Broyden–Fletcher–Goldfarb–Shanno (BFGS) schemes, endowed with a suitable backtracking strategy applied to guarantee convergence to stationary points of the sub-problem considered.
Numerical experiments will demonstrate the effectiveness of the proposed method for the solution of several exemplar mesh denoising, inpainting and completion problems.
The rest of the paper is organized as follows. In Sect. 2, we review the main mathematical approaches related to the three geometry tasks considered. In Sect. 3, the proposed geometric variational model is presented; details on its numerical optimization by means of the ADMM-based scheme are described in Sect. 4. In Sect. 5, we briefly discuss how each of the three tasks can be realized by solving the optimization problem (1). Experimental results and comparisons are given in Sect. 6. We draw conclusions in Sect. 7.

Related Works
Standard numerical approaches solving the mesh denoising problem can essentially be divided into three classes. The first class inherits PDE-based techniques from analogous problems arising in image processing and addresses the task by using linear/nonlinear diffusion equations, see, e.g. [6,7], with particular care to preserve local curvature features [8]. Recently, also thanks to their considerable impact in the image processing field, two further major approaches have started to be investigated: data-driven and optimization-based methods. Approaches belonging to the former class aim to learn the relationship between noisy geometry and ground-truth geometry from a training dataset, see, e.g. [9]. Optimization-based mesh denoising methods formulate the mesh restoration problem as a minimization problem, where one seeks a denoised mesh best fitting the input mesh while satisfying a prior on the ground-truth geometry and noise distribution. These approaches have grown more and more popular, also thanks to the rapid development of studies on sparsity-inducing penalties. Among them, penalty terms aimed at approximating the ℓ0 pseudo-norm have been directly applied for denoising mesh vertices in [10] and noisy point clouds in [11]. However, the strong geometric bias favoured by the use of the ℓ0 pseudo-norm can produce spurious overshoots and fold-backs; hence, these methods may become computationally inefficient, even under small amounts of noise. Alternatively, an ℓ1 penalty can be used. This is very frequent in recent sparse image/signal processing problems as well as in the mesh processing community, see, e.g. [12], where ℓ1-sparsity was adopted to denoise point sets in a two-phase minimization strategy. As is well known, the ℓ1 norm tends to underestimate high-amplitude values, thus struggling in recovery under high-level noise and producing undesired staircase and shrinkage artefacts.
In this work, we focus on an optimization-based approach for mesh denoising and exploit a non-convex penalty approximating the ℓ0 pseudo-norm, so as to induce sparsity without artefacts while promoting fairness.
As far as the hole filling or repairing problem is concerned, standard approaches range from fourth-order surface diffusion PDE methods [13], to volumetric approaches mainly based on signed distance functions to implicitly represent the surface [14,15]. Other popular non-polygonal methods rely on Radial Basis Function implicit interpolations [16] and Moving Least Squares [17]. A commonly adopted fairness prior is the quadratic Laplacian energy ‖ΔV‖_2^2, proposed for smooth hole filling with the so-called least squares meshes, see [18].
Our penalty function also favours smoothness. However, unlike least squares meshes, we adopt a nonlinear curvature measure, the Willmore energy, which leads to a more rounded shape filling.
When some a priori knowledge on the missing part is available, we can do more than simply fill the hole: we can complete/repair the hole geometry with a context-aware template patch, with a minimally distinguishable transition zone. Many efforts have been devoted to the automatic selection of the template patch in the object itself, under a best-matching assumption in a context-sensitive manner [1,2], or by synthesizing similar geometric features [19].
Our completion proposal is rather based on the assumption that the patch to be pasted has been pre-selected and placed at the desired position by the user.

Variational Recovery Model
Solving the variational problem (1) on surfaces requires the definition of the discrete manifold representing the underlying object of interest as well as the discrete approximation of the first-order differential operators involved.
We thus assume M := (V, T) to be a triangulated surface (mesh) of arbitrary topology approximating a 2-manifold S embedded in R^3, with V ∈ R^{n_V × 3} being the set of vertices, T ∈ N^{n_T × 3} the set of face triangles, and E the set of n_E edges. We denote the first disk, i.e. the set of triangle neighbours of a vertex v_i, by D(v_i). Let N(V) be the mapping computing the piecewise-constant normal field over the triangles of the mesh, where the m-th element is the outward unit normal at face τ_m. Notice that the normal vector's sign depends on the orientation of the face; consistently oriented normals require adjacent faces to have consistent orientation. Under this discrete setting, the scalar coordinate functions x, y, z : V → R of the mesh M are understood as piecewise linear functions.
We now introduce the discretization of the gradient operator on a 3D mesh. Since the normal field is piecewise constant over the mesh triangles, its gradient vanishes everywhere except across the mesh edges, along which it is constant. Therefore, the gradient operator discretization is represented by a sparse matrix D ∈ R^{n_E × n_T}, whose i-th row has nonzero entries ±l_i in the two columns corresponding to the pair of triangles sharing the edge e_i, where l_i = ‖e_i‖_2, i = 1, ..., n_E, is the length of the i-th edge. The matrix D can be decomposed as D = L D̃, with L = diag{l_1, l_2, ..., l_{n_E}} being the diagonal matrix of edge lengths, whose values may be updated during the iterative scheme considered, and D̃ ∈ R^{n_E × n_T} an edge-length-independent sparse matrix.
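As a concrete illustration, the assembly of D = L D̃ from a vertex/face list can be sketched as follows. This is a minimal Python sketch with illustrative names (not the authors' code), assuming each interior edge is shared by exactly two faces; boundary edges are simply skipped.

```python
import numpy as np
from scipy.sparse import csr_matrix

def gradient_operator(V, F):
    """Sketch of the edge-based gradient operator D = L * D_tilde.

    Each row of D_tilde carries +1 / -1 in the columns of the two
    triangles sharing an interior edge; L scales each row by the
    corresponding edge length l_i = ||e_i||_2.
    """
    # Map each undirected edge to the faces containing it.
    edge_faces = {}
    for f_idx, (a, b, c) in enumerate(F):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault((min(u, v), max(u, v)), []).append(f_idx)

    rows, cols, vals, lengths = [], [], [], []
    r = 0
    for (u, v), faces in edge_faces.items():
        if len(faces) != 2:          # skip boundary edges
            continue
        rows += [r, r]
        cols += faces
        vals += [1.0, -1.0]
        lengths.append(np.linalg.norm(V[u] - V[v]))
        r += 1
    D_tilde = csr_matrix((vals, (rows, cols)), shape=(r, len(F)))
    L = np.asarray(lengths)
    # Row-wise scaling by edge length gives D = L * D_tilde.
    return D_tilde.multiply(L[:, None]).tocsr(), L
```

Applied to a matrix N of per-face normals, row i of D N then gives l_i times the difference of the two normals adjacent to edge e_i, so a constant normal field is annihilated by construction.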
Key ingredients of the proposed formulation (1) are the two operator masks M V and M E . The role of the mask M E is to adapt the recovery according to the surface morphology, while M V selects the region to be preserved in the inpainting and completion tasks.
M_E is a sharp detection mask represented by a binary vector M_E ∈ {0, 1}^{n_E} which has 1s in correspondence with sharp edges. Recalling that the dihedral angle associated with the edge e_i is the angle between the normals of the adjacent triangle faces τ_r and τ_s sharing e_i, we classify e_i as a sharp edge if the dihedral angle θ_i ∈ [0, 360) is greater than a given threshold th. In formulas,

(M_E)_i = 1 if θ_i > th, and (M_E)_i = 0 otherwise.   (5)

Given M_E, its complementary mask is the vector M_E^c = 1_{n_E} − M_E. Figure 2 shows M_E for three different surfaces. The influence of the choice of the mask M_E on the denoising task is shown in Fig. 3. The perturbed sharp sphere is illustrated in the left panel of Fig. 3, and the denoised meshes in the right panel, obtained by applying the proposed method under the choices M_E = 0_{n_E} and M_E = 1_{n_E} in Fig. 3b and e, respectively. The space-variant mask M_E obtained with th = 30 and illustrated in Fig. 3c is applied to obtain the denoised mesh in Fig. 3d.
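The mask construction in (5) amounts to thresholding the angle between the unit normals of the two faces adjacent to each edge. A minimal sketch, assuming the per-face normals and the edge-to-face adjacency have already been computed (names are illustrative):

```python
import numpy as np

def sharp_edge_mask(face_normals, edge_face_pairs, th_deg=30.0):
    """Binary mask M_E: 1 where the angle between the unit normals of
    the two faces sharing an edge exceeds a threshold th (in degrees).
    An illustrative reimplementation of Eq. (5), not the authors' code."""
    n1 = face_normals[edge_face_pairs[:, 0]]
    n2 = face_normals[edge_face_pairs[:, 1]]
    # Angle between unit normals; clip guards against round-off.
    cos_theta = np.clip(np.einsum('ij,ij->i', n1, n2), -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_theta))
    return (theta > th_deg).astype(np.uint8)
```

The complementary mask of the text is then simply `1 - sharp_edge_mask(...)`.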
Stemming from the consideration that a general scanned surface is characterized by sharp as well as rounded features, we specify the form of problem (1) to determine solutions V* which are close to the given data V_0 according to the observation model; the resulting functional is given in (6), where ‖·‖_2 denotes the Frobenius norm. The functional in (6) involves three terms designed to meet three different and competing requirements that arise quite naturally from the intuitive concept of surface recovery: (1) fidelity to the known data; (2) a parametric (defined in terms of the parameter a ∈ R+) discontinuity-preserving smoothing favouring piecewise constant normals; (3) smooth connection between parts and inside unknown regions. The functional is composed of the sum of smooth convex (quadratic) terms and a non-smooth non-convex regularization term; thus, the functional J in (6) is bounded from below by zero, non-smooth, and can be convex or non-convex depending on the values of M_E and a.

Sparsity-Inducing Penalty
We aim at constructing a parameterized sparsity-promoting regularizer characterized by a tunable degree of non-convexity a ∈ R+, inducing sparsity on the vector of components ‖(DN)_i‖_2, i = 1, ..., n_E, which represent the normal variation between adjacent triangles sharing the i-th edge.
A substantial amount of recent work has studied the class of sparsity-promoting parametrized non-convex regularizers, given their provable theoretical properties and practical performance [20,21]. We consider here one of the most effective representatives of this class, i.e. the Minimax Concave (MC) penalty φ(·; a) : [0, +∞) → R, introduced in [22] and previously applied to ‖(DN)_i‖_2 in the context of mesh editing in [23], defined by

φ(t; a) = t − (a/2) t^2 for t ∈ [0, 1/a], and φ(t; a) = 1/(2a) for t ∈ (1/a, +∞),   (7)

which, for any value of the parameter a, satisfies standard assumptions of continuity, concavity and boundedness. We denote by φ′(t; a) and φ″(t; a) the first-order and second-order derivatives of φ with respect to the variable t, respectively.
The parameter a allows tuning the degree of non-convexity, such that φ(·; a) mimics the asymptotically constant behaviour of the ℓ0 pseudo-norm for a → ∞, while behaving as an ℓ1 regularization term for values of a approaching zero. For values of a in between, the MC penalty function in (7) is a sparsity-inducing penalty which preserves sharp features in normal variations better than the ℓ0 pseudo-norm regularizer, and is more accurate than the ℓ1 regularizer, which tends to produce shrinkage effects.
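The limiting behaviour described above is easy to inspect numerically. The sketch below implements one common normalization of the MC penalty consistent with that behaviour (the scaling in the paper's Eq. (7) may differ):

```python
import numpy as np

def mc_penalty(t, a):
    """Minimax Concave penalty in one common normalization
    (an assumed form; the paper's Eq. (7) may differ by scaling):
        phi(t; a) = t - a t^2 / 2   for 0 <= t <= 1/a
                  = 1 / (2 a)       for t >  1/a
    Behaves like the l1 norm as a -> 0 and saturates to a constant
    (l0-like) for large t."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 1.0 / a, t - 0.5 * a * t**2, 1.0 / (2.0 * a))
```

Note that the two branches join continuously at t = 1/a with common value 1/(2a), and the penalty is differentiable there as well.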
This motivated us to use it in the construction of the regularizer R 1 (V ; M E ).

Edge-Based Discretization of the Willmore Energy
Numerical approximations of the Willmore energy in digital geometry processing and geometric modelling are mainly based either on finite element discretization and numerical quadrature [24,25], or on discrete differential geometry approaches ab initio. Discrete isometric bending models derived from an axiomatic treatment of discrete Laplace operators [26], the discrete conformal vertex-based energy well-defined for simplicial surfaces using the circumcircles of their faces [3,27], and the integer linear programming approach [28] all fall into the latter class.
Here, we consider an alternative edge-based discrete approximation of the Willmore energy (2) for open triangulated surfaces M represented by polygonal meshes. This energy is a sum over contributions from individual edges,

E(M) = Σ_{j=1}^{n_E} ‖(DN)_j‖_2^2,   (8)

where (DN)_j measures how the surface "curves" near e_j.
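Under this edge-based reading of (8), the energy is a single sparse matrix-vector product followed by a squared Frobenius norm. A minimal sketch (assuming D and the per-face normal matrix N are given):

```python
import numpy as np

def discrete_willmore_energy(D, N):
    """Edge-based discrete Willmore-type energy: the sum over edges of
    the squared norm of the per-edge normal variation (DN)_j.
    A sketch of Eq. (8) under the stated edge-based discretization."""
    DN = D @ N            # shape (n_E, 3): per-edge normal differences
    return float(np.sum(DN * DN))
```

As expected, a constant normal field (a flat patch) yields zero energy, while any normal variation across edges contributes positively.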
To derive the continuum limit of (8) in the limit of vanishing triangle size, we assume that S is a two-dimensional manifold of arbitrary topology embedded in R^3 and parameterized by (X, Ω), with Ω ⊂ R^2 an open reference domain and X : Ω → R^3 the corresponding coordinate map (that is, the parametrization of S at a given point). We denote the local coordinates in Ω by (ξ^1, ξ^2). For a given point x ∈ X(Ω) ⊂ S, the tangent space T_x S at x is spanned by the vectors r_i = ∂X/∂ξ^i, i = 1, 2; the induced metric is given by g_ij = r_i · r_j, its inverse is denoted by g^ij, so that g^ik g_kj = δ^i_j, or in matrix notation [g^ij] = [g_ij]^{-1}, and its determinant is defined as det(g) ≡ |g| = (1/2) ε^{ik} ε^{jl} g_ij g_kl = g_11 g_22 − g_12 g_21, with ε the Levi-Civita symbol.
The second fundamental form II : T_x S × T_x S → R is the symmetric bilinear form represented by the coefficients L_ij = ∂_i r_j · n. When the grid size of the triangulation M is sent to 0, the energy (8) approximates the Willmore energy, as stated by the following Proposition 1.
Proof Let us first consider the integrand of (2) in the continuum, with h = κ_1 + κ_2 = tr(L^k_i) the mean curvature and k = κ_1 κ_2 = det(L^k_i) the Gaussian curvature, where κ_1, κ_2 represent the principal curvatures. The second fundamental form with components L_ij is related to the linear map L^k_i with respect to the basis of T_x S by raising an index with the metric, and we denote L^i_j = Σ_k g^{ik} L_kj, 1 ≤ i, j ≤ 2. Following the notation in [29], we use the identity g^{ij} g^{kl} = g^{ik} g^{jl} + ε^{il} ε^{mn} g^{mj} g^{nk} in the integrand of (2). Substituting into (10) the Weingarten equations ∂_i n = L^k_i r_k, i = 1, 2, we have

g^{ij} g^{kl} L_ik L_jl = L^j_k L^k_j = ∂_k n · ∂_j n g^{kj},   (11)

which is the squared gradient of the normal vector field. Therefore, replacing (10)-(11) in (2), we get ∫_S ∂_k n · ∂_j n g^{kj} dS.
For sufficiently fine, non-degenerate tessellations M_j approximating S, we consider a partition of the undeformed surface S into the disjoint union of diamond-shaped tiles T̄, one associated with each mesh edge e. Following Meyer et al. [30], one can use the barycenter of each triangle to define these regions or, alternatively, the circumcenters. Over such a diamond partition, the integral (12) is computed as a sum over all the diamond tiles. If the triangles do not degenerate, we can approximate the area of the diamond related to the edge e_i in M_j by ‖e_i‖_2^2, i.e. dT̄ ≈ ‖e_i‖_2^2, so that the tile contribution of the normal gradient is approximated by ‖(DN)_i‖_2^2. The result of the limiting process depends on the triangulations considered; in particular, we assume that the triangulations of S consist of almost equilateral triangles. For our purposes, the discrete Willmore energy will be used based on the observation that (2) is invariant under rigid motions and uniform scaling of the surface, which implies that E(S) itself is a conformal invariant of the surface S, see [3].

Remark 1
Although the introduced discrete formulation is very simple when compared with the ones introduced in [3,31], it produces good results in practice. In order to validate the effective applicability of the proposed discrete Willmore energy, we evaluated E(M) in (8) on exemplar meshes.

Numerical Solution of the Optimization Problem
In this section, we illustrate the ADMM-based iterative algorithm used to compute the numerical solution of (6). In order to define the ADMM iteration on triangular mesh surfaces, we first consider a matrix variable N ∈ R^{n_T × 3} with row components defined as in (3) and resort to the variable splitting technique by introducing t ∈ R^{n_E × 3} with t := DN, where D is defined in (4). The optimization problem (6) can thus be reformulated as a constrained problem (14) in the variables (V, N, t), subject to the constraints t = DN and N = N(V). We define the augmented Lagrangian functional associated with problem (14) as in (15), where β_1, β_2 > 0 are scalar penalty parameters, and ρ_1 ∈ R^{n_E × 3}, ρ_2 ∈ R^{n_T × 3} represent the matrices of Lagrange multipliers associated with the two constraints, respectively. We then consider the corresponding saddle-point problem (15)-(16). An ADMM-based iterative scheme can now be applied to approximate its solution. Initializing both dual variables ρ_1^(0), ρ_2^(0) to zero, each ADMM iteration alternates the minimization over the primal variables t, N and V, followed by the closed-form updates of the Lagrange multipliers ρ_1 and ρ_2. In the following, we show in detail how to solve the three minimization sub-problems (17), (18) and (19) for the primal variables t, N and V, respectively.
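The overall control flow of the scheme can be sketched as follows. This skeleton takes the three sub-problem solvers as callables, since their exact closed forms depend on the model parameters; all names and the stopping rule are illustrative, not the authors' implementation.

```python
import numpy as np

def admm_mesh_restore(V0, solve_t, solve_N, solve_V, normals,
                      D_of, beta1, beta2, max_iter=100, tol=1e-6):
    """Skeleton of the three-block ADMM scheme described in the text.
    solve_t / solve_N / solve_V play the roles of the sub-problems
    (17)-(19); normals(V) returns per-face unit normals and D_of(V)
    the edge-based gradient operator (edge lengths depend on V)."""
    V = V0.copy()
    N = normals(V)                     # per-face unit normals
    D = D_of(V)                        # edge-based gradient operator
    rho1 = np.zeros((D.shape[0], 3))   # multiplier for t = D N
    rho2 = np.zeros_like(N)            # multiplier for N = N(V)
    for _ in range(max_iter):
        t = solve_t(D @ N - rho1 / beta1)              # t-sub-problem
        N = solve_N(t, V, rho1, rho2, D)               # N-sub-problem
        N /= np.linalg.norm(N, axis=1, keepdims=True)  # renormalize
        V_new = solve_V(N, rho2, V)                    # V-sub-problem
        rho1 += beta1 * (t - D @ N)                    # dual ascent
        rho2 += beta2 * (N - normals(V_new))
        if np.linalg.norm(V_new - V) <= tol * max(np.linalg.norm(V), 1.0):
            return V_new
        V = V_new
        D = D_of(V)                    # edge lengths change with V
    return V
```

The re-assembly of D at each pass reflects the remark in the text that the edge-length matrix L must be updated as the vertices move.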

Sub-problem for t.
The minimization sub-problem for t in (17) can be explicitly rewritten by omitting the constant terms in (15). Due to the separability of φ(·; a), problem (22) is equivalent to n_E independent three-dimensional problems, one for each t_j, j = 1, ..., n_E, of the form (23), where r_j^(k+1) collects the terms that are constant with respect to t_j. Necessary and sufficient conditions for strong convexity of the cost functions in (23) are demonstrated in [32]; in particular, problems (23) are strongly convex if and only if condition (24) holds. Note that the sub-problem is always convex when t_j has associated (M_E)_j = 0, as this eliminates φ(·; a) from the sub-problem. Whenever (24) holds, the unique minimizers of (23) can be obtained in closed form as in (25). We remark that the condition on β_1 only ensures the convexity condition (24) of the t-sub-problem (23), but does not guarantee convergence of the overall ADMM scheme.

Sub-problem for N. The minimization sub-problem (18) for N can be reformulated by omitting the constant terms in (15). The first-order optimality conditions lead to three linear systems, one for each spatial coordinate of N ∈ R^{n_T × 3}. Since β_1, β_2 > 0, the coefficient matrix of the linear systems is sparse, symmetric, positive definite and identical for all three coordinate vectors. The systems can thus be solved efficiently by applying, e.g. a single Cholesky factorization. At each iteration, the diagonal matrix of edge lengths L in D = L D̃, defined in (4), needs to be updated as the vertices V move to their updated positions. For large meshes, an iterative solver warm-started with the solution of the previous ADMM iteration is preferred. A normalization is finally applied, as N represents a normal field.
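Under the MC normalization assumed earlier (φ(t; a) = t − a t²/2 on [0, 1/a]), the closed-form minimizer of each separable problem φ(‖t_j‖; a) + (β_1/2)‖t_j − r_j‖² is the well-known firm-shrinkage operator, which requires β_1 > a for strong convexity. The sketch below is an assumed reconstruction of the shape of (25): the paper's actual thresholds may differ by the mask/penalty weighting.

```python
import numpy as np

def firm_shrink_rows(R, a, beta1):
    """Row-wise closed-form minimizer of
        phi(||t||; a) + beta1/2 * ||t - r||^2
    for the MC penalty phi(t; a) = t - a t^2/2 on [0, 1/a]
    (firm shrinkage).  Requires beta1 > a for strong convexity;
    an illustrative sketch, not the paper's exact Eq. (25)."""
    assert beta1 > a
    norms = np.linalg.norm(R, axis=1)
    scale = np.zeros_like(norms)            # ||r|| <= 1/beta1 -> 0
    mid = (norms > 1.0 / beta1) & (norms <= 1.0 / a)
    scale[mid] = (beta1 * norms[mid] - 1.0) / ((beta1 - a) * norms[mid])
    scale[norms > 1.0 / a] = 1.0            # large rows pass unchanged
    return R * scale[:, None]
```

Unlike soft thresholding, large-magnitude rows are left untouched, which is exactly the "no shrinkage of high-amplitude values" behaviour the text attributes to the MC penalty.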
The reconstructed normal map N* obtained by solving (14) via the proposed ADMM satisfies orientation consistency, as proved in [5], thus reducing the foldover issue. This property is not trivially satisfied by most two-stage mesh denoising algorithms (normal smoothing followed by vertex update), which suffer from the normal orientation ambiguity problem in the vertex updating stage: direction inconsistency of the normal vectors provokes ambiguous shifts of the vertex positions [33,34]. In [35], this issue is solved by an orientation-aware vertex updating scheme.

Sub-problem for V. Omitting the constant terms in (15), the sub-problem for V reads as in (26). The functional J_V(V) is proper, smooth, non-convex and bounded from below by zero. A minimum can be obtained by applying the gradient descent (GD) algorithm with backtracking satisfying the Armijo condition, or by using the BFGS algorithm. In order to balance the slow convergence of GD against the high computational cost of the operators involved in the BFGS method, we also considered a heavy-ball type rule, following [36], and its extension with backtracking (covering also non-smooth problems) given in [37]. In particular, the heavy-ball method is a multi-step extension of gradient descent which, starting from V^(0) = V^(k), iterates over V as

V^(j+1) = V^(j) − α_j ∇J_V(V^(j)) + δ_j (V^(j) − V^(j−1)),   (27)

where α_j > 0 is a step-size parameter and 0 ≤ δ_j < 1. Note that for δ_j = 0, (27) reduces to the gradient descent method.
In [37], convergence of the scheme above to stationary points is proved in the context of non-convex cost functions such as the one in (26), with an extension also to non-smooth scenarios. All the numerical optimization methods considered here rely on an easily computable formula for the gradient of the functional J_V in (26), which is derived in the following.
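The heavy-ball rule (27) combined with an Armijo-type backtracking line search can be sketched generically as follows. The function and step parameters are illustrative; this is a sketch of the optimizer class the text describes, not the authors' tuned solver for J_V.

```python
import numpy as np

def heavy_ball(grad, f, V0, alpha0=1.0, delta=0.5, c1=1e-4,
               max_iter=500, tol=1e-8):
    """Heavy-ball iteration with Armijo-type backtracking, a generic
    sketch of the V-sub-problem solver built on Eq. (27).
    delta = 0 recovers plain gradient descent."""
    V, V_prev = V0.copy(), V0.copy()
    for _ in range(max_iter):
        g = grad(V)
        gnorm2 = np.sum(g * g)
        if gnorm2 <= tol:
            break
        alpha = alpha0
        while True:   # halve alpha until sufficient decrease holds
            V_new = V - alpha * g + delta * (V - V_prev)
            if f(V_new) <= f(V) - c1 * alpha * gnorm2 or alpha < 1e-12:
                break
            alpha *= 0.5
        V_prev, V = V, V_new
    return V
```

Backtracking removes the need to estimate a Lipschitz constant for the gradient, which is the motivation given in the text for the Armijo rule.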

Proposition 2 provides an explicit formula (28) for the gradient of J_V; its derivation is sketched below.

Proof For all triangles m = 1, ..., n_T, the face normal N_m(V) depends only on the three vertices of τ_m; consequently, the gradient with respect to v_i, i = 1, ..., n_V, is nonzero only over the triangles sharing v_i, which are contained in the first disk D(v_i). Therefore, the sum in (26) reduces to the faces of D(v_i), and the third term in (26) reduces to the scalar product g_i := N_m^(k+1) · N_m(V), since both N_m^(k+1) and N_m(V) have unit norm. In order to compute ∇_{v_i}(g_i), we rely on two differentiation properties which hold for every pair of constant vectors w, u ∈ R^3 and can be easily proved. To evaluate the product-rule derivative, we apply property 2 once keeping the left-side factor constant, and once with w = z. Combining the results leads to the explicit formula for ∇_{v_i} g_i, which reduces to (28).
In Fig. 4 (first and second rows), we report graphs showing the energy decay and the gradient norm decay for the three different algorithms used, i.e. GD (with and without backtracking), BFGS, and heavy-ball with backtracking. The plots refer to the meshes twelve (first column) and block (second column), representative of the entire set of meshes analysed in the experimental section.
We remark that the use of an Armijo-type backtracking rule is justified by the involved expression (28), which makes an accurate estimation of the Lipschitz constant L_{J_V} of ∇J_V(V) quite challenging. In the proposed strategy, a (typically) large initial step size α_0 is reduced until the following inequality is verified:

J_V(V^(j+1)) ≤ J_V(V^(j)) − c_1 α_j ‖∇J_V(V^(j))‖_2^2,

with c_1 ∈ (0, 1), and where V^(j) denotes the j-th update of V given by (27). From the convergence plots, we notice that, upon manual selection of a sufficiently small constant step size α, the convergence of plain GD without backtracking is as good as that of the heavy-ball algorithm combined with backtracking. However, the former choice is problem-dependent; hence, a backtracking strategy automatically adjusting the value of α to an appropriate size is preferred. A rigorous analysis of the convergence properties of our proposed three-block ADMM scheme following, e.g. [38], is not easy to derive. However, we will provide some evidence of numerical convergence in Sect. 6.

A Practical Use of the Geometry Repair Framework
In the following, we provide some details for the practical use of the geometric framework introduced above in view of its application to the three different tasks we are interested in.
Feature-aware mesh denoising. The goal of a surface denoising algorithm is to remove undesirable noise or spurious information on a 3D mesh, while preserving original features, including edges, creases and corners. The restored surface is a 3D mesh that represents as faithfully as possible a piecewise smooth surface, where edges appear as discontinuities in the normal field. To achieve this goal, the natural choice is to set in (6) M_V = 1_{n_V} and define M_E as in (5), so as to distinguish salient edges from smooth regions. In case of severe noise, the estimate of the mask M_E may be affected by false edge detections. In such cases, we suggest recomputing the edge mask M_E along the ADMM iterations.
Smooth hole filling/inpainting. In contrast to techniques for image inpainting, which exploit the given spatial structure of the data (the regular grid of an image), surfaces lack a natural underlying spatial domain, which brings an additional degree of freedom into the setting of the problem. At the same time, the vertices' positions encode both the function values and the domain of the function to be reconstructed. The initial mesh M 0 = (V 0 , T 0 ) thus has to be set as the original (possibly noisy) incomplete mesh, with the (possibly disconnected) hole regions trivially enclosed, labelled as S D and marked as zeros in M V . On the other hand, the mask M E can still be defined as in (5), by additionally forcing zero values on the edges in S D . The proposed geometric repair algorithm then simultaneously performs denoising outside the holes and smooth filling in the internal part of the holes, through the regularizer R 2 .
Context-aware completion. In some applications, smooth filling of holes is not sufficient: this is the case in archaeology and, in general, in cultural heritage applications, where the main goal is the reconstruction of a digital twin of a cultural heritage object. Some parts of the original 3D model can be damaged or missing but can be completed by means of characteristic parts taken from the object under consideration or from others. Given the original incomplete mesh M 0 with a region of interest bounded by a curve b 0 and characterized by vertices Ṽ ⊂ V 0 and triangles T̃ ⊂ T 0 , together with a template patch P = (V P , T P ) bounded by a curve b P , we build a repaired mesh M * by replacing (Ṽ, T̃) with (V P , T P ) and blending the two parts through the proposed variational model.
Note that, in case the region of interest on M 0 that has to be completed is a hole, then (Ṽ, T̃) are trivially empty sets.
We assume that the template patch P is properly aligned in the correct position and that both polygonal curves b 0 and b P approximate oriented, closed, simple curves in R 3 with the same number of vertices. The correct positioning can be performed either automatically (by rigid body transformation algorithms) or through user interaction.
A narrow band around b 0 , named strip(b 0 ), containing at least the 2-disk of triangle neighbours adjacent to b 0 , plays the role of S D . Hence, M V is the characteristic function of M 0 \strip(b 0 ), i.e. it is zero only on strip(b 0 ).
The operator mask M E takes value one on each sharp edge in both M 0 \strip(b 0 ) and P. According to the user's desiderata, the blending can be performed in three different ways: setting the edges in strip(b 0 ) all to zero in M E , to force a smooth joint with the template; setting them all to one, to keep a sharp connection; or defining them by the spatially adaptive M E in (5), to maintain geometric continuity G 0 /G 1 over the blended region.
The vertices V * of the completed surface M * are obtained by minimizing (6), properly initialized with V (0) = (V 0 \Ṽ) ∪ V P , while maintaining the connectivity defined by T * = (T 0 \T̃) ∪ T P . The connectivity T * is automatically well defined since we imposed b 0 ≡ b P .
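The initialization step above can be sketched as follows (an illustrative Python sketch; the identification b 0 ≡ b P is assumed to be given as a map from patch-boundary vertex indices to the matching mesh indices on b 0 , and all function and argument names are hypothetical):

```python
import numpy as np

def initialize_completion(V0, T0, removed_v, removed_t, VP, TP, boundary_map):
    """Build V^(0) = (V_0 \\ Vt) union V_P and T^* = (T_0 \\ Tt) union T_P.

    removed_v, removed_t : index sets of the replaced vertices/triangles (Vt, Tt).
    boundary_map         : {patch boundary index -> mesh index on b_0}, gluing b_P to b_0.
    """
    # new index for each kept vertex of M_0
    keep_v = [i for i in range(len(V0)) if i not in removed_v]
    new_idx = {old: new for new, old in enumerate(keep_v)}
    V_new = [V0[i] for i in keep_v]
    # patch vertices: boundary ones are glued via boundary_map, interior ones appended
    patch_idx = {}
    for j in range(len(VP)):
        if j in boundary_map:                 # vertex on b_P, identified with b_0
            patch_idx[j] = new_idx[boundary_map[j]]
        else:                                 # interior patch vertex
            patch_idx[j] = len(V_new)
            V_new.append(VP[j])
    # connectivity T^* = (T_0 \ Tt) union T_P, reindexed consistently
    T_new = [[new_idx[v] for v in T0[f]] for f in range(len(T0)) if f not in removed_t]
    T_new += [[patch_idx[v] for v in tri] for tri in TP]
    return np.asarray(V_new), np.asarray(T_new)
```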
We refer the reader to Fig. 1 for a visual representation of the three different tasks performed. Moreover, Sect. 6 offers additional insights.

Numerical Examples
We validate the proposed geometric framework both qualitatively and quantitatively on a variety of benchmark triangulated surfaces characterized by different sharpness and smoothness features and on some real datasets.
For the purpose of a quantitative validation, the meshes M 0 = (V 0 , T 0 ) have been synthetically corrupted. The noisy vertices in V 0 are related to the underlying noise-free vertices V GT by the following additive degradation model (V 0 ) i = (V GT ) i + η i d i , i = 1, . . . , n V , (32) where the product η d accounts for the noise perturbations. Namely, η ∈ R n V is assumed to be, at each vertex, independently and identically distributed as a zero-mean Gaussian random variable, i.e. η i ∼ Gauss(0, σ 2 ), i = 1, . . . , n V , with known variance σ 2 , and d ∈ R n V ×3 is a vector field of noise directions with elements d i ∈ R 3 , i = 1, . . . , n V , which can be either random directions or the normals at the vertices. The perturbations are thus characterized by a noise level γ ∈ R + defined by σ = γ l̄, with l̄ representing the average edge length.
The quantitative evaluation is carried out in terms of the following error metrics, which measure the discrepancy of the computed V * , N * w.r.t. the noise-free mesh V GT , N GT :
• Mean squared angular error (MSAE);
• L 2 vertex-to-vertex error (E V ).
For all the tests, the iterations of the ADMM algorithm are stopped as soon as either of the two prescribed stopping conditions is fulfilled. Figure 5 shows the energy decay curve versus the number of iterations for some of the meshes reported in this section. We observe that, for all the meshes considered, the energy converges to a stationary value. This represents an empirical validation of the numerical convergence of the proposed ADMM-based minimization scheme. Having performed a comparative analysis between inner solvers in Sect. 4, we used the GD algorithm with backtracking for solving the subproblem for V , with a warm start strategy allowing us to restrict to a small number (three in our experiments) of GD iterations while achieving good relative accuracy.
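The degradation model (32) can be sketched as follows (illustrative Python; the area-weighted vertex-normal convention is an assumption, since the paper does not specify how the normal directions d i are computed from the mesh):

```python
import numpy as np

def corrupt(V_gt, T, gamma, along_normals=True, seed=0):
    """Additive degradation V_0 = V_gt + eta * d, with eta_i ~ N(0, sigma^2),
    sigma = gamma * lbar (lbar = average edge length), and d the field of noise
    directions (vertex normals here, or random unit vectors)."""
    rng = np.random.default_rng(seed)
    # average edge length lbar over all triangle edges
    E = np.vstack([T[:, [0, 1]], T[:, [1, 2]], T[:, [2, 0]]])
    lbar = np.linalg.norm(V_gt[E[:, 0]] - V_gt[E[:, 1]], axis=1).mean()
    if along_normals:
        # area-weighted vertex normals (one common convention; an assumption here)
        FN = np.cross(V_gt[T[:, 1]] - V_gt[T[:, 0]], V_gt[T[:, 2]] - V_gt[T[:, 0]])
        d = np.zeros_like(V_gt)
        for k in range(3):
            np.add.at(d, T[:, k], FN)   # accumulate adjacent face normals per vertex
    else:
        d = rng.standard_normal(V_gt.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    eta = rng.normal(0.0, gamma * lbar, size=(len(V_gt), 1))
    return V_gt + eta * d
```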
With respect to the comparisons shown with competing approaches for mesh repairing, we remark that most of them are based on hierarchical data structures combined with various heuristic algorithms. In contrast, the results presented in the following are directly derived from the solution of the proposed unified mathematical optimization problem and do not require any heuristic post-processing procedures.
All the meshes are rendered in flat shading and visualized using the ParaView software.
Example 1: feature-aware denoising. To evaluate the performance of the proposed method for mesh denoising, we compared the results with other state-of-the-art variational methods for mesh denoising, namely the methods introduced in [10,33,34,39], whose implementations have been kindly provided by the authors of [39] at https://github.com/bldeng/GuidedDenoising, and with a learning-based approach presented in [9]. For each method, we show the best results achieved by tuning the corresponding set of parameters. Figure 6 shows the denoised meshes coloured by their mean curvature scalar map, with fixed range, together with zoomed details on mesh edges. From a visual inspection, we notice remarkable overlaps in the denoised meshes obtained from the other compared methods and severe perturbations of the triangle shapes in the reconstructed meshes. To further demonstrate how robust our approach is w.r.t. increasing noise perturbation, in Fig. 7 we report qualitative and quantitative results for noise levels γ = {0.2, 0.2, 0.3, 0.4, 0.5, 0.6}, from top to bottom. In the last row, the mesh has been corrupted by arbitrary perturbations of the noise directions (d i ) in (32). Below each recovered surface, we report the quantitative evaluations according to the two error metrics (MSAE × 10 2 , E V × 10 6 ). Both quantitatively and qualitatively, the results confirm the effectiveness of the proposed variational model in preserving sharp features while smoothly recovering rounded parts. Finally, we comment on the efficiency of our algorithm, whose computational time is, on average, one order of magnitude lower than that of the ℓ 2 − ℓ 0 denoising method [11], which is the slowest, while it is comparable to the other compared methods.
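The two error metrics can be sketched as follows (common definitions from the mesh-denoising literature are assumed, since the formulas are not reproduced here; some works normalize E V differently, e.g. by the bounding-box diagonal):

```python
import numpy as np

def msae(N_star, N_gt):
    """Mean squared angular error (radians^2) between the recovered and the
    ground-truth unit normals; definition assumed from common usage."""
    c = np.clip(np.sum(N_star * N_gt, axis=1), -1.0, 1.0)
    return np.mean(np.arccos(c) ** 2)

def ev(V_star, V_gt):
    """Mean squared L2 vertex-to-vertex error (one common convention)."""
    return np.mean(np.sum((V_star - V_gt) ** 2, axis=1))
```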
To improve the estimation of the mask M E in the presence of severe noise, we dynamically updated the edge mask M E every three ADMM iterations.
Example 2: hole filling/inpainting. We applied our geometric framework to the recovery of various meshes M 0 which exhibit holes or damaged parts. Figure 2 illustrates the basic workflow for the inpainting task on the angel mesh, which takes as input the original, possibly noisy, mesh M 0 (Fig. 2, left) and the inpainting mask M V , which can be of arbitrary topology; in the figure, the holes to be filled are marked as zeros in M V and coloured blue. The recovery of the angel mesh using smooth hole filling is illustrated in Fig. 2 (second row). Figure 8 (first row) shows the challenging Igea mesh, which presents a deep groove on the left side of the mouth and a shallower one on the right cheek. Our geometric framework was able to inpaint the shallower groove perfectly, while the deep one was filled in a satisfactory, even if not complete, way. This is explained by the different contributions of the Willmore and sparsity-inducing penalties: the latter acts more strongly than the former, especially for high levels of noise. Hence, adding suitable weights to the two penalties could overcome this disparity.
The dataset minerva, shown in Fig. 8 (second row, first column), presents a few holes caused by the scanner acquisition, in the head and under the nose. Moreover, a vertical strip has been intentionally added to the inpainting region S D in order to remove the groove caused by the gluing of the two parts of the minerva's face. This dataset has been provided by ENEA, Bologna, Italy, and was acquired by a VIVID laser scanner; it presents inherent noise due to the optical acquisition system. The result of repairing the damaged geometry and filling the surface holes is illustrated in Fig. 8c.
In Fig. 8 (third row), the inpainting framework has been applied to repair a shard of neolithic pottery provided by the CEPAM laboratory (CNRS, France), obtained by fusing several fragments. The inpainting region, shown in Fig. 8 (third row, second column), has been intentionally chosen so as to eliminate the obvious fractures between the joined fragments.

Example 3: context-aware completion. We finally applied context-aware completion as an editing tool for seamless object fusion. Completion results for the meshes lion, screwdriver and igea are illustrated in Figs. 9 and 10. The templates P smoothly complete the original surfaces.
A critical aspect in context-aware completion is the continuity imposed in the joint region, which we denoted by strip(b 0 ). Conditions for geometric continuity between parametric surfaces are well established, while for meshes a rigorous treatment of this topic is still missing. In our framework, according to the user's desiderata, the template P can be joined to M 0 smoothly, by setting M E (strip(b 0 )) ≡ 0, in a sharp manner, by setting M E (strip(b 0 )) ≡ 1, or in a blended fashion, by simply using the M E mask of one of the two meshes (or even a combination of them). Imposing different continuity conditions on strip(b 0 ) therefore amounts to defining the mask M E differently on strip(b 0 ).
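These three continuity choices amount to a simple per-edge assignment on strip(b 0 ), which can be sketched as follows (illustrative only: M E is represented here as a dictionary from edges to values, and the spatially adaptive values of Eq. (5) are assumed to be supplied externally):

```python
def blend_mask(M_E, strip_edges, mode, M_E_adaptive=None):
    """Set M_E on the joint region strip(b_0) per the user's choice:
    'smooth'   -> zeros (smooth joint with the template),
    'sharp'    -> ones (sharp connection),
    'adaptive' -> spatially adaptive values (stand-in for Eq. (5),
                  passed in as M_E_adaptive). M_E is a dict {edge: value}."""
    out = dict(M_E)  # leave the mask outside strip(b_0) untouched
    for e in strip_edges:
        if mode == "smooth":
            out[e] = 0.0
        elif mode == "sharp":
            out[e] = 1.0
        elif mode == "adaptive":
            out[e] = M_E_adaptive[e]
    return out
```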
A typical example is shown in Fig. 11 (left panel), where a synthetically created hole on the fandisk mesh M 0 is filled with a similar corner patch template P, coloured cyan. In the right panel, we report details of the completion area M 0 ∪ P (a), the output M * for M E (strip(b 0 )) ≡ 0 (b), M * for M E (strip(b 0 )) ≡ 1 (c), and M * for M E (strip(b 0 )) estimated from dihedral angles (d). Note that the initial boundary b P was larger than b 0 and slightly shifted. Nevertheless, the feature-adaptive regularization perfectly respects the continuity on strip(b 0 ), as illustrated in Fig. 11d, while a smooth mask (Fig. 11b) destroys the sharp edges, and a non-smooth joint (Fig. 11c) creates spurious artefacts.
(Caption of Fig. 9: examples of context-aware completion: M 0 with template patch P, output M * for two different incomplete meshes.)

Conclusions
We presented a novel geometric framework for denoising, inpainting and context-based completion, for the recovery of damaged and incomplete scanned data. In contrast to volumetric approaches, which use complex data structures and sophisticated procedures, we formulate the solution of the three tasks in terms of a single variational problem which is parameterization-free and normal-consistent. The proposed approach is intended to repair damaged and incomplete meshes resulting from range scanning, as well as to support all modelling operations aiming at replacing damaged or missing parts of a surface. Future investigations will focus on the study of the theoretical convergence of the proposed numerical algorithm, which minimizes a spatially variant functional characterized by a convex non-convex structure: it varies spatially from being convex (due to the presence of the Willmore energy) to non-convex (MC penalty) according to the mask operator M E . Nevertheless, the algorithm demonstrates empirical convergence and very satisfying practical performance. The encouraging results can be further extended to the completion of missing parts of objects with template patches whose boundaries have different topology, in order to validate the process on more realistic cases. Finally, a future direction will be to couple the proposed approach with image inpainting models favouring the completion of texture-like regions.