Stabilization of spline bases by extension

We present a method to stabilize bases with local supports by means of extension. It generalizes the known approach for tensor product B-splines to a much broader class of functions, which includes hierarchical and weighted variants of polynomial, trigonometric, and exponential splines, but also box splines, T-splines, and other function spaces of interest with a local basis. Extension removes elements that cause instabilities from a given basis by linking them with the remaining ones by means of a specific linear combination. The two guiding principles for this process are locality and persistence. Locality aims at coupling basis functions whose supports are close together, while persistence guarantees that a given set of globally supported functions, like certain monomials in the case of polynomial splines, remain in the span of the basis after extension. Furthermore, we study how extension influences the approximation power and the condition of Gramian matrices associated with the basis, and present a series of examples illustrating the potential of the method.


Introduction
In applications, many approximation tasks can be formulated as a minimization problem of the form a(u, u) − 2y(u) → min, u ∈ B, (1) where a is an inner product and y is a linear functional on the finite-dimensional function space B. Often, B is generated by a family of locally supported basis functions. In particular, spline spaces together with a plethora of variants are of interest.
To be useful in practice, the chosen basis should guarantee a certain approximation power, and it must be stable in the sense that the Gramian system AU = Y resulting from (1) is well-(or at least not seriously ill-) conditioned.
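For illustration, the passage from the minimization problem (1) to the Gramian system AU = Y can be traced in a small computation. The following sketch uses discrete least squares with a monomial basis and synthetic data; both choices are ours, not from the text:

```python
import numpy as np

# Discrete least squares as an instance of (1): with a(u, v) = sum_i u(x_i) v(x_i)
# and y(u) = sum_i f(x_i) u(x_i), minimizing a(u,u) - 2y(u) over span{b_k}
# leads to the Gramian system A U = Y with A_kl = a(b_k, b_l) and Y_k = y(b_k).
x = np.linspace(0.0, 1.0, 40)
f = np.sin(2.0 * np.pi * x)
C = np.stack([x**0, x, x**2, x**3], axis=1)   # collocation matrix b_k(x_i)
A = C.T @ C                                   # Gramian matrix
Y = C.T @ f                                   # right-hand side
U = np.linalg.solve(A, Y)
U_ref = np.linalg.lstsq(C, f, rcond=None)[0]  # the same least squares minimizer
print(np.allclose(U, U_ref))                  # True
```

The agreement with the direct least squares solve confirms that AU = Y is just the normal equations of the underlying fitting problem.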
Tensor product B-splines are known to combine good approximation properties with uniform stability. But even here, caution is required. Consider, for instance, the seemingly harmless L2-approximation of a given function on the 3d unit cube by quartic B-splines with integer knots. The resulting Gramian system of size 125 × 125 has a spectral condition number of about 1.8 × 10¹⁸, and its numerical solution will produce useless results unless special care is taken. Of course, this specific problem is easily circumvented, for instance, by replacing uniform B-splines by the Bernstein basis of the space of triquartic polynomials on the unit cube.
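The reported condition number can be reproduced with a short computation. The sketch below builds the five quartic B-splines with integer knots that are alive on one unit interval via the Cox-de Boor recursion (our own scaffolding, not part of the paper) and exploits that the 3d Gramian is the threefold Kronecker product of the 1d Gramian, so condition numbers cube:

```python
import numpy as np

def bspl(x, j, k, t):
    """Cox-de Boor recursion: B-spline of degree k with knots t[j], ..., t[j+k+1]."""
    if k == 0:
        return np.where((t[j] <= x) & (x < t[j + 1]), 1.0, 0.0)
    return ((x - t[j]) / (t[j + k] - t[j]) * bspl(x, j, k - 1, t)
            + (t[j + k + 1] - x) / (t[j + k + 1] - t[j + 1]) * bspl(x, j + 1, k - 1, t))

deg = 4
t = np.arange(-4.0, 10.0)                     # integer knots; b_j has support [j-4, j+1]
xg, wg = np.polynomial.legendre.leggauss(8)   # exact for the degree-8 integrand
x = 0.5 * (xg + 1.0)                          # Gauss nodes mapped to [0, 1]
w = 0.5 * wg
vals = np.array([bspl(x, j, deg, t) for j in range(5)])  # the 5 B-splines alive on [0,1]
G1 = (vals * w) @ vals.T                      # 1d Gramian on the unit interval
cond1 = np.linalg.cond(G1)
print(f"cond 1d: {cond1:.1e}, cond 3d: {cond1**3:.1e}")  # 3d condition = (1d condition)^3
```

The cube of the 1d condition number is of the order 10¹⁸, matching the figure quoted above.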
More persistent problems arise if the underlying domain is not compatible with the shape of the supports of the basis functions, as for tensor product B-splines restricted to a domain with curved boundary. Then, typically, there exist basis functions with only a small fraction of their support lying inside the domain, which is known to cause instability. Other sources of instability include, for instance, lacunary data in scattered data problems.
Different methods are known to deal with such issues. For instance, Tikhonov regularization can be applied [1,20,21]. Here, (1) is replaced by a(u, u) + λq(u, u) − 2y(u) → min, u ∈ B, λ > 0, where q is a bilinear form penalizing unwanted behavior of the target function. For instance, when seeking a fair surface in geometric modeling, this functional could be the integral of the squared Frobenius norm of the Hessian, q(u, u) = ∫_Ω ‖D²u‖²_fro. Typically, the challenge of Tikhonov regularization is the determination of a suitable parameter λ > 0. It must be big enough to achieve the requested stabilization effect, and it must be small enough to obtain a sufficiently accurate approximation of the solution of the original problem. The problem is well analyzed (see the references above), but still, finding an optimal value in a specific situation is a non-trivial task. In the context of stabilizing bases with pruned elements near the boundary, Tikhonov regularization seems to be of limited use. We are not aware of compelling results in that direction.
Another standard approach to unstable linear systems is preconditioning. Also here, no systematic treatment of problems of the type described above seems to be available, with one exception: for tensor product B-splines, many (but not all) of the instabilities stemming from a trimming process can simply be healed by diagonal preconditioning. That is, the basis functions are scaled such that the diagonal of the Gramian is constantly 1. In [14], this approach is analyzed, and its benefits and limitations are discussed.
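Diagonal preconditioning amounts to the symmetric scaling A ↦ D^{-1/2} A D^{-1/2} with D = diag A. A minimal numerical sketch (with an invented, badly scaled basis) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
# columns of B play the role of basis functions with wildly different scales
B = rng.standard_normal((50, 5)) * np.array([1.0, 1e-4, 1.0, 1e-4, 1.0])
A = B.T @ B                        # Gramian with a badly scaled diagonal

d = 1.0 / np.sqrt(np.diag(A))
As = A * np.outer(d, d)            # rescaled basis: diagonal constantly 1
print(np.diag(As))                 # all ones
print(f"{np.linalg.cond(A):.1e} -> {np.linalg.cond(As):.1e}")
```

Here the instability is purely a matter of scaling, so the scaled Gramian is well conditioned; as noted in the text, trimming-induced instabilities are only partially curable this way.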
In the bivariate case, there exists an efficient, highly specialized technique, called condensation, to convert pruned standard tensor product B-splines into a stable basis, cf. [17]. Hierarchical variants are unknown so far.
If greater generality is requested, extension offers a solution. Here, B-splines near the boundary of the domain are coupled with inner ones to achieve stability. This process is one of the two building blocks of so-called weighted extended B-splines (web-splines), cf. [6-8]. The extension rules for uniform and non-uniform tensor product B-splines, based on the reproduction of polynomials and a locality principle, turn out to be quite simple and easy to implement. Furthermore, it is known that, under mild assumptions, the resulting spline bases combine full approximation power with optimal stability. First steps towards the extension of other spline spaces have been taken. For instance, in [13], a procedure for the stabilization of hierarchical B-splines is described in the context of isogeometric analysis, but extension is restricted to the finest level so that standard rules can be applied. By contrast, the method suggested in [12] is not subject to such a restriction.
Following the ideas developed in the latter reference for hierarchical B-splines, we generalize the extension principle to a broad class of bases, covering not only polynomial, trigonometric, and exponential splines, but also box splines, splines over triangulations, and many more. In particular, it is applicable to hierarchical spline spaces [4,5,10], LR splines [3,15], and T-splines [19]. These spaces are of special importance for the efficient solution of higher-dimensional approximation problems, like the approximation of elliptic PDEs in 3d. As for web-splines, basis functions that are recognized as critical for stability are linked to non-critical ones by means of a linear combination. Preferably, a locality principle is used for the selection of candidates for the coupling, and the extension coefficients are determined such that a specific class of globally supported functions remains in the span of the modified basis.
The paper is organized as follows: In the next section, we describe a general extension procedure. Then, in Section 3, some properties of extended bases are discussed. In particular, we show how the condition of the Gramian and the approximation power are influenced. The results show that a significant improvement of the condition can be expected without reducing the approximation power of the given spline space significantly. Finally, in Section 4, we present a series of examples that demonstrate the functionality and potential of the proposed method.

Extension
For a finite index set K, let B := [b_k]_{k∈K} be a sequence of functions b_k : ℝ^d → ℝ with compact support s_k := supp b_k. The span of the restriction to the domain of interest Ω ⊂ ℝ^d is denoted by B := span B|_Ω. Without loss of generality, we assume that B|_Ω is a basis of B, i.e., dim B = #K. In the applications we have in mind, the b_k are B-splines, L-splines, box splines, splines over arbitrary triangulations in ℝ^d, or variants thereof, like weighted or hierarchical splines. The common feature of all these cases is the following: B contains a lower-dimensional space P := span P|_Ω, where P = [p_r]_{r∈R} is a finite sequence of functions p_r : Ω → ℝ with the property that the restriction to any non-empty open subset ω ⊂ Ω is linearly independent, i.e., dim P|_ω = #R. In particular, P may be a space of pure or weighted polynomials of some kind (algebraic, trigonometric, exponential, and combinations thereof).
Typically, the global approximation power of B is related to the local approximation power of P and the size of the supports s_k.
Recalling (1), we consider quadratic minimization problems of the form a(u, u) − 2y(u) → min, u ∈ B, where a is an inner product and y is a linear functional on B. Applications include continuous and discrete least squares problems, smoothing splines, and the approximate solution of elliptic partial differential equations via their variational formulation. The coefficients U = [u_k]_{k∈K} of a minimizer u = Σ_k u_k b_k solve the linear system AU = Y, where the Gramian matrix A has entries A_{k,ℓ} = a(b_k, b_ℓ) and Y is the vector with entries Y_k = y(b_k). Different reasons may hamper the solution of this system. In particular, functions b_j, j ∈ J, with a relatively small part of their support inside Ω may cause severe ill-conditioning. Another well-known phenomenon may occur in scattered data approximation. If the data sites are lacunary so that certain basis functions b_j, j ∈ J, have no data site in their support, a is only semi-definite and the coefficients u_j, j ∈ J, are not determined uniquely.
The strategy we propose here to solve such issues assumes that a subset {b_j : j ∈ J} of critical functions causing instabilities or a loss of positive definiteness is known. Typically, the number #J of critical functions is much smaller than #K, but all that we formally assume is the existence of a non-empty open subset ω ⊂ Ω \ ∪_{j∈J} s_j. Essentially, this means that Ω is not completely covered by the supports of the b_j. Building on the non-critical functions {b_i : i ∈ I}, I := K \ J, we construct new functions b^e_i := b_i + Σ_{j∈J} e_{j,i} b_j, i ∈ I, using suitably chosen extension coefficients e_{j,i} ∈ ℝ. Preferably, the support s^e_i := supp b^e_i should be comparable in size with s_i, which can be achieved by using nonzero coefficients e_{j,i} only for indices j corresponding to critical functions b_j whose support s_j is close to s_i. This guideline is called the locality principle. In matrix-vector notation, we also write B^e = BE with a matrix E of size #K × #I and submatrices E_I, E_J corresponding to rows with indices in I, J, respectively. Here, E_I = Id is just the identity, while E_J is to be determined. The second crucial request is that B^e := span B^e|_Ω still contains P, which is referred to as the persistence principle.
Locality and persistence suggest that, at least qualitatively, the approximation properties of the full space B are kept by B^e, while the new linear system A^e U^e = Y^e is significantly better conditioned. The construction of E proceeds in two steps: first, the coefficient matrix M of size #K × #R representing P in terms of the basis, P = BM, is determined; second, the persistence condition is resolved for E_J.

Let us discuss the two steps in more detail: The first step is straightforward, and different methods exist to determine M. If available, one employs dual functionals ϕ_k, characterized by ϕ_k(b_{k'}) = δ_{k,k'}, to find the entries of M as M_{k,r} = ϕ_k(p_r). Otherwise, M may be determined by solving one global or a series of local linear systems of the form Σ_{k∈K} M_{k,r} b_k(x_ℓ) = p_r(x_ℓ), ℓ ∈ L, with a suitable set {x_ℓ : ℓ ∈ L} of interpolation points.

The second step is more interesting. Since E_I = Id, persistence is equivalent to the linear system E_J M_I = M_J. Concerning the existence of a solution, we note that all critical functions vanish on ω, so that P|_ω = (B_I M_I)|_ω. By assumption, the dimension of the span of the left-hand side is dim P|_ω = #R. Hence, rank M_I = #R, showing that E_J M_I = M_J is solvable. However, typically, the number #I of rows of M_I is much larger than #R so that the solution E_J is not unique. According to the locality principle, any reasonable strategy to select suitable extension coefficients from the range of possibilities aims at keeping the supports s^e_i of the extended functions b^e_i small. This suggests good approximation properties of B^e and expedient sparsity of the Gramian matrix A^e.

Let us consider the determination of a single row E_j = [e_{j,i}]_{i∈I} of E_J, satisfying E_j M_I = M_j, j ∈ J. It is not necessary to use more than #R non-zero entries in E_j. Denoting the corresponding index vector by I(j) = [i_1, ..., i_{#R}] with pairwise different indices i_r ∈ I, we set e_{j,i} = 0 for i ∉ I(j). The remaining unknown entries of E_j satisfy the quadratic system E_{j,I(j)} M_{I(j)} = M_j. (2) That is, the problem of determining E_j is reduced to finding a suitable vector I(j) and the subsequent solution of a typically small linear system. Preferred candidates for the indices i_r are those which are close to j in some sense. For instance, one can request that the diameter of s_j ∪ s_{i_r} is small.
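Returning to the first step, the determination of M by interpolation can be sketched numerically. The toy basis below (piecewise linear hat functions, our choice) reproduces P = [1, x] with nodal coefficients, which makes the result easy to verify:

```python
import numpy as np

def hat(k, x):
    """Piecewise linear hat function centered at the integer k."""
    return np.maximum(0.0, 1.0 - np.abs(x - k))

K = range(6)                     # toy basis b_0, ..., b_5
xs = np.arange(6.0)              # interpolation points x_l, one per basis function
C = np.array([[hat(k, x) for k in K] for x in xs])   # collocation matrix b_k(x_l)
P = np.stack([np.ones(6), xs], axis=1)               # p_0 = 1, p_1 = x
M = np.linalg.solve(C, P)        # solve sum_k M_kr b_k(x_l) = p_r(x_l)
print(M)                         # hats reproduce linear functions nodally: M_kr = p_r(k)
```

For richer bases, the same linear system is solved globally or as a series of local systems, exactly as described above.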
Once such a vector I(j) is determined, the above linear system is solved. If it turns out that M_{I(j)} is singular, one of the indices i_r ∈ I(j) causing linear dependence of rows is replaced by a formerly unused index, and the solution process is restarted. When M_{I(j)} is invertible, and the above observation concerning full rank of M_I shows that this eventually happens, the vector E_{j,I(j)} can be used to form the j-th row of E_J.
After all rows of E_J have been determined, E_J is combined with the identity matrix E_I to obtain the matrix E, and the extension is complete.
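The whole procedure fits in a few lines. The following sketch assembles E from a reproduction matrix M, a set J of critical indices, and user-supplied candidate lists I(j); the random M and the index choices are illustrative assumptions, not data from the paper:

```python
import numpy as np

def extend(M, J, candidates):
    """Assemble E (#K x #I): identity rows for the non-critical indices I and,
    for each critical j in J, a row E_j solving E_j M_{I(j)} = M_j, where
    I(j) = candidates[j] lists #R nearby non-critical indices (locality)."""
    K, R = M.shape
    I = [k for k in range(K) if k not in set(J)]
    col = {i: c for c, i in enumerate(I)}
    E = np.zeros((K, len(I)))
    for c, i in enumerate(I):
        E[i, c] = 1.0                          # E_I = Id
    for j in J:
        Ij = candidates[j]
        Ej = np.linalg.solve(M[Ij].T, M[j])    # quadratic system (2)
        for i, e in zip(Ij, Ej):
            E[j, col[i]] = e
    return E, I

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 3))                # generic reproduction matrix
E, I = extend(M, J=[0, 7], candidates={0: [1, 2, 3], 7: [4, 5, 6]})
print(np.allclose(E @ M[I], M))                # persistence: E M_I = M, so P lies in span(B^e)
```

The final check E M_I = M encodes the persistence principle: the functions in P remain representable in the extended basis.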

Properties
The construction presented in the previous section has the following basic properties:

Theorem 1 With the notation introduced above, (i) B^e|_Ω is a basis of B^e := span B^e|_Ω, (ii) P ⊂ B^e, (iii) the dual functionals ϕ_i, i ∈ I, of the standard basis are also dual to the extended one, i.e., ϕ_i(b^e_{i'}) = δ_{i,i'}, and (iv) the linear systems AU = Y and A^e U^e = Y^e, characterizing minimizers u and u^e of (1) on B and B^e, respectively, are related by A^e = EᵀAE and Y^e = EᵀY.

The first statement is a prerequisite for the functionality of extension, and the second one confirms the persistence principle. The third property makes known dual functionals of the standard basis available for the extended case. Below, we will make repeated use of this fact to prove theorems concerning the spectrum of the Gramian and the approximation power. Eventually, the last statement is helpful to derive the linear system corresponding to the extended basis from the given one by simple matrix multiplications, without explicit recomputation of the entries.
Next, let us discuss the effect of extension on the condition of the Gramian matrix. The spectral condition numbers of the Gramian matrices A and A^e are given by cond A = λ_max/λ_min and cond A^e = λ^e_max/λ^e_min, where λ_max, λ^e_max and λ_min, λ^e_min denote the maximal and minimal eigenvalues, respectively. To this end, we consider the behavior of maximal and minimal eigenvalues in turn.
The following simple theorem shows how λ e max can be bounded in terms of λ max and the extension coefficients.

Theorem 2 The spectral radii of the Gramian matrix A and its extended equivalent A^e satisfy

λ^e_max ≤ ‖E‖₁ ‖E‖∞ λ_max,

where ‖·‖₁ and ‖·‖∞ denote the column and row sum norm, respectively.
Proof Denoting by ‖·‖₂ the spectral norm, we find λ^e_max = ‖EᵀAE‖₂ ≤ ‖E‖₂² ‖A‖₂ ≤ ‖E‖₁ ‖E‖∞ λ_max.

The concrete computation of the factor ‖E‖₁‖E‖∞ is simple, but without knowledge of the specific situation, no a priori estimates can be given. However, if the maximal distance between supports of basis functions that are coupled by extension is known, it may be possible to find a bound on the maximal extension coefficient and on the number of nonzero entries per row and column of E. Typically, these numbers are small. A detailed study of the case of tensor product B-splines, establishing uniform bounds for Lipschitz domains, can be found in [8].
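The bound of Theorem 2 is easy to check numerically. The sketch below uses a random SPD Gramian and a random coupling block E_J (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
K, nI = 10, 7
G = rng.standard_normal((30, K))
A = G.T @ G                                    # SPD Gramian
E = np.vstack([np.eye(nI),                     # E_I = Id (non-critical rows)
               rng.uniform(-1.0, 1.0, (K - nI, nI))])  # E_J: coupling rows
Ae = E.T @ A @ E                               # extended Gramian
lam_max = np.linalg.eigvalsh(A)[-1]
lam_e_max = np.linalg.eigvalsh(Ae)[-1]
bound = np.linalg.norm(E, 1) * np.linalg.norm(E, np.inf) * lam_max
print(lam_e_max <= bound)                      # True: Theorem 2 holds
```

The inequality holds for any E since ‖E‖₂² ≤ ‖E‖₁‖E‖∞; the randomization merely illustrates a typical case.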
The possible increase of the maximal eigenvalue of the Gramian is an unwanted, but inevitable side effect of extension. Typically, it is modest and easily bearable when, at the same time, the minimal eigenvalue can be raised by many orders of magnitude. The analysis of the minimal eigenvalue is slightly more involved and, specializing our general setup, carried out for inner products of Sobolev type,

a(u, v) := Σ_{|μ|≤m} ∫_Ω w_μ D^μ u D^μ v, (3)

with m ∈ ℕ₀ and nonnegative weight functions w_μ. This covers many kinds of least squares fits and variational formulations of partial differential equations. Allowing the weights w_μ to be generalized functions in the sense of distributions, also discrete problems like scattered data approximation can be addressed. Let Ω_k, k ∈ K, be a family of open subsets of Ω, called local domains, such that Ω_k ∩ s_k ≠ ∅. We define the local bilinear forms

a_k(u, v) := Σ_{|μ|≤m} ∫_{Ω_k} w_μ D^μ u D^μ v,

so that

Σ_{k∈K} a_k(u, u) ≤ N a(u, u), (4)

where N := sup_{x∈Ω} #{k : x ∈ Ω_k} is the maximal number of local domains containing the same point. Furthermore, we define the sets K(k) := {k' ∈ K : s_{k'} ∩ Ω_k ≠ ∅} of indices of basis functions not vanishing on Ω_k. More precisely, the local domains Ω_k must be chosen such that the local bilinear forms a_k are positive definite on the subspace B_k := span {b_{k'} : k' ∈ K(k)} generated by the locally relevant basis functions. That is, the local Gramian A_k with entries a_k(b_{k'}, b_{k''}), k', k'' ∈ K(k), has to be invertible. While the trivial choice Ω_k := Ω is always possible, much smaller sets may be feasible and more appropriate in the concrete setting. It is easy to see that the functions λ_k := Σ_{k'∈K(k)} (A_k^{-1})_{k,k'} b_{k'}, with coefficients given by the k-th row of A_k^{-1}, establish the Riesz representation of a family of dual functionals ϕ_k := a_k(λ_k, ·), i.e.,

ϕ_k(b_{k'}) = δ_{k,k'}, k' ∈ K(k). (5)

Introducing the constants γ_k := a_k(λ_k, λ_k), Γ := max_{k∈K} γ_k, and Γ^e := max_{i∈I} γ_i, we find the following lower bounds:

Theorem 3 For a bilinear form a according to (3), the minimal eigenvalues of the Gramian matrices A and A^e are bounded from below by

λ_min ≥ 1/(NΓ) and λ^e_min ≥ 1/(NΓ^e),
respectively.
Proof Let u = BU = Σ_{k∈K} u_k b_k ∈ B. Then, by (5) and the Cauchy-Schwarz inequality,

|u_k| = |ϕ_k(u)| = |a_k(λ_k, u)| ≤ γ_k^{1/2} a_k(u, u)^{1/2}.

Furthermore, by (4),

‖U‖₂² ≤ Γ Σ_{k∈K} a_k(u, u) ≤ ΓN a(u, u).

Using the Rayleigh quotient, we obtain

λ_min = min_{U≠0} a(u, u)/‖U‖₂² ≥ 1/(NΓ).

The arguments in the extended setting are almost the same.

The theorem suggests a simple way to control the minimal eigenvalue of the Gramian: classifying as critical all basis functions b_k for which γ_k = a_k(λ_k, λ_k) exceeds a given threshold C² yields Γ^e ≤ C² and hence λ^e_min ≥ 1/(C²N). In particular, semi-definite problems, characterized by λ_min = 0 and Γ = ∞, can be regularized that way (see Example 4.5).
It should be noted that the actual value of γ_k depends on the choice of the local domain Ω_k. If Ω_k is too small, the estimate may not be sharp enough to be useful. For instance, for a(u, v) = ∫ uv and quartic B-splines with integer knots, one can choose Ω_k as the central knot interval of s_k. This yields N = 1, γ_k ≈ 1.9 × 10⁴, and 1/(Nγ_k) ≈ 5.2 × 10⁻⁵.
By contrast, Ω_k = s_k yields N = 5, γ_k ≈ 18.6, and 1/(Nγ_k) ≈ 1.1 × 10⁻², which is better by three orders of magnitude. Even larger local domains are possible, but at a certain point, the effect of shrinking γ_k is overcompensated by the increase of N. In general, as a rule of thumb, it is sensible to consider Ω_k = s_k and to modify this set only if the corresponding local bilinear form a_k is not positive definite. Of course, in many applications, other criteria than the value of γ_k may be used to select critical basis functions. For instance, the classification process of web-splines [6] is purely based on the existence of a complete grid cell of the support of a B-spline inside the domain.
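The two values of γ_k quoted above can be recomputed. The sketch below evaluates γ_k = (A_k^{-1})_{kk} for the quartic cardinal B-spline, once with Ω_k the central knot interval and once with Ω_k = s_k; the Cox-de Boor recursion and the cell-wise Gauss quadrature are our own scaffolding:

```python
import numpy as np

def bspl(x, j, k, t):
    """Cox-de Boor recursion for the B-spline of degree k with knots t[j..j+k+1]."""
    if k == 0:
        return np.where((t[j] <= x) & (x < t[j + 1]), 1.0, 0.0)
    return ((x - t[j]) / (t[j + k] - t[j]) * bspl(x, j, k - 1, t)
            + (t[j + k + 1] - x) / (t[j + k + 1] - t[j + 1]) * bspl(x, j + 1, k - 1, t))

def gamma(cells, js, center, t, deg=4):
    """gamma_k = (A_k^{-1})_{kk} for a(u, v) = int u v over the union of unit cells."""
    xg, wg = np.polynomial.legendre.leggauss(8)
    x = np.concatenate([c + 0.5 * (xg + 1.0) for c in cells])
    w = np.concatenate([0.5 * wg for c in cells])
    V = np.array([bspl(x, j, deg, t) for j in js])
    A_k = (V * w) @ V.T                       # local Gramian
    return np.linalg.inv(A_k)[center, center]

t = np.arange(-6.0, 8.0)                      # integer knots; b_j has support [j-6, j-1]
# central knot interval [0,1] of s_k = [-2,3]: 5 relevant B-splines, N = 1
g_small = gamma([0.0], js=range(2, 7), center=2, t=t)
# full support [-2,3]: 9 relevant B-splines, N = 5
g_big = gamma(np.arange(-2.0, 3.0), js=range(0, 9), center=4, t=t)
print(f"{g_small:.3g}  {g_big:.3g}")          # roughly 1.9e4 and 19, cf. the text
```

The second choice of local domain gives a much smaller γ_k and hence, despite the larger N, a much better lower bound for λ_min.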
Finally, we study the relation between extension and approximation power. Clearly, B^e ⊂ B implies that approximation in B is typically better than in B^e, but it can be expected that this loss of accuracy is merely a matter of constants and does not reduce the order of convergence. Below, to keep things simple, we present a prototypical pointwise estimate, which can be generalized and refined in many ways. Moreover, we formulate the error estimate only for the extended basis, having in mind that the standard case is also covered when setting J = ∅ and E = Id.
Following a convenient way to establish results on the approximation power of function spaces generated by local bases, we assume two ingredients to be available: First, given a family [ϕ_k]_{k∈K} of dual functionals ϕ_k : B → ℝ, there exist extensions ψ_i : L∞(Ω) → ℝ, i ∈ I, of the ϕ_i to the space of bounded functions which are uniformly bounded and local in the sense that the constant

α := max_{i∈I} sup { |ψ_i f| : ‖f‖_{s^e_i ∩ Ω} ≤ 1 }

is finite, where ‖·‖_ω denotes the sup-norm on the set ω ⊂ Ω. The corresponding quasi-interpolant is defined by

Qf := Σ_{i∈I} ψ_i(f) b^e_i.

Second, the function space P admits a Bramble-Hilbert-type estimate of the following form: There exist a seminorm |·|, a constant c = c(Ω, P), and an exponent n > 0 such that for any function f : Ω → ℝ with finite |f| and any ball B_h(x) := {y ∈ Ω : |y − x| ≤ h} with radius h centered at x ∈ Ω, there exists an approximation p ∈ P of f satisfying

‖f − p‖_{B_h(x)} ≤ c |f| h^n (6)

(see [16] for a detailed study of the polynomial case).
Theorem 4 Given x ∈ Ω, define the index set I(x) := {i ∈ I : x ∈ s^e_i} and let B_h(x) be the ball centered at x with radius h := max_{i∈I(x)} diam s^e_i. If f : Ω → ℝ is a continuous function that can be approximated locally in P according to (6), then

|f(x) − (Qf)(x)| ≤ (1 + αβ‖E‖∞) c |f| h^n,

where β := sup_{x∈Ω} Σ_{k∈K} |b_k(x)| is the Lebesgue constant of the standard basis.
Proof Let p ∈ P be the approximation of f on B_h(x) according to (6). By Theorem 1, Qp = p, and hence

|f(x) − (Qf)(x)| ≤ |(f − p)(x)| + |Q(f − p)(x)|.

The first summand on the right-hand side is bounded by

|(f − p)(x)| ≤ ‖f − p‖_{B_h(x)} ≤ c |f| h^n.

For the second one, we obtain

|Q(f − p)(x)| ≤ Σ_{i∈I(x)} |b^e_i(x)| · max_{i∈I(x)} |ψ_i(f − p)|.

The first factor is bounded by

Σ_{i∈I(x)} |b^e_i(x)| ≤ ‖E‖∞ Σ_{k∈K} |b_k(x)| ≤ ‖E‖∞ β.

Since s^e_i ⊂ B_h(x) for all i ∈ I(x), the second factor satisfies

max_{i∈I(x)} |ψ_i(f − p)| ≤ α ‖f − p‖_{B_h(x)} ≤ α c |f| h^n.

The combination of the last five displayed inequalities proves the claimed estimate.
Let us briefly discuss how extension influences the estimate given by the theorem: First, for standard bases, say B-splines restricted to a curved domain, the constant α can be excessively large so that the given estimate becomes meaningless. Classifying basis functions as critical for which the norms of the associated dual functionals are large, and extending accordingly, may decrease α significantly and thus improve the estimate. Of course, this is just a way to make the proof technique more effective; it does not increase the actual approximation power.
Second, the norm E ∞ of the extension matrix should be bounded in some way. This request is evident and appears equally in the context of controlling the spectral radius of the Gramian matrix (see Theorem 2).
Third, the maximal size h of extended supports containing x is crucial for the estimate, and care must be taken that they are not unduly enlarged by the extension process. This observation confirms the importance of the locality principle when choosing indices I (j) of basis functions to be coupled with critical b j .

Examples
In this section, we discuss a few examples of increasing complexity to illustrate functionality and potential of the proposed method.
The support s_6 = [−1, 2] of b_6 contains only the two data sites x₁₃, x₁₄, which are located near its end points and leave a big gap in between. The least squares fit u with standard B-splines B yields small deviations at the data sites, but the shape of f is poorly captured in the area of the gap (see Fig. 1 (left)). The condition number cond A ≈ 7.2 × 10⁵ of the Gramian system is relatively high, but certainly not critical for the numerical solution. Rather, the problem is related to the function space B itself. The effect of the inherent instability becomes apparent if we consider a small change ỹ₁₃ = y₁₃ + δ of the function value at the site x₁₃. It causes a much bigger change of the approximating spline ũ. For instance, at the point t = 1/2, we have ũ(1/2) ≈ u(1/2) + 130δ. This oversensitivity accounts for the unsatisfactory result and makes the setup useless for applications. Of course, the demonstrated effect becomes worse and worse as the problematic data sites move closer to the boundaries of the support s_6. Even though it is quite evident that b_6 and only b_6 should be considered critical, we want to substantiate this assessment by analyzing the local bilinear forms a_k. The natural choice Ω_k = s_k for the local domains is possible for k ∉ {6, 7}. However, s_6 and s_7 contain only 2 and 3 data sites, respectively, for always 5 relevant basis functions, so that the corresponding bilinear forms a_6 and a_7 would not be positive definite. Setting I(6) = [4, 5, 7], the extended basis contains only nine functions, but the new minimizer u^e is a significantly better global approximation of f than u (see Fig. 1 (right)). The condition number of the Gramian system is reduced to cond A^e ≈ 31, and changing y₁₃ by δ leads to a comparable change of the approximation, ũ^e(1/2) ≈ u^e(1/2) + 1.5δ.
The above choice I(6) = [4, 5, 7] seems natural, but is by no means unique. In particular, I(6) = [5, 7, 8] is equally suitable. Abandoning the locality principle, even more possibilities come into consideration. Figure 2 shows the two special cases I(6) = [7, 8, 9] (left) and I(6) = [3, 4, 9] (right). The condition of the Gramian systems is about as good as before. However, the extended supports are unnecessarily large, and consequently, the quality of the approximation is reduced, especially in the latter case. As a variant of the above setting, we replace polynomial splines by exponential and trigonometric splines and, assuming −α/π ∉ ℕ to exclude degenerate cases, determine the extension coefficients for I(6) = [4, 5, 7]. A purely numerical computation of extension coefficients, as sufficient for applications, is of course not more complicated than in the polynomial case. Figure 3 confirms the beneficial effect of extension also in the non-polynomial setting.

Peculiarities with hierarchical B-splines
In this example, we demonstrate that the linear system (2) is not always solvable when dealing, for instance, with hierarchical B-splines. These problems are related to a lack of local linear independence. Figure 4 shows a quadratic hierarchical basis B = [b_1, b_2, ..., b_9] with two levels according to Kraft's construction [10]. The first three B-splines have integer knots, while the six remaining ones have half-integer knots. Again, the monomials of degree ≤ 2 are reproduced by means of a matrix M, which is unique because of the global linear independence of B. If, for instance, b_1 is critical, the choice I(1) = [2, 8, 9] is natural and suggests small extended supports. However, the resulting linear system E_{1,I(1)} M_{I(1)} = M_1 is not solvable. In this special case, it is easily seen that b_8 and b_9 must not be used simultaneously since the corresponding rows M_8 and M_9 of M are linearly dependent. By contrast, choices like I(1) = [2, 3, 9] or I(1) = [2, 7, 9] are admissible.
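The index replacement strategy mentioned in Section 2 handles such cases automatically. The following sketch greedily collects candidate indices whose rows of M stay linearly independent, skipping any index that would make M_{I(j)} singular; the toy matrix (values invented) mimics the dependent pair b_8, b_9 above:

```python
import numpy as np

def select_and_solve(M, j, candidates):
    """Pick #R candidate indices whose rows of M are linearly independent,
    skipping those that would make M_{I(j)} singular, then solve (2)."""
    R = M.shape[1]
    Ij = []
    for i in candidates:
        if np.linalg.matrix_rank(M[Ij + [i]]) == len(Ij) + 1:
            Ij.append(i)                       # row stays independent: accept
        if len(Ij) == R:
            return Ij, np.linalg.solve(M[Ij].T, M[j])
    raise ValueError("candidate list exhausted")

# toy reproduction matrix: rows 3 and 4 are linearly dependent, mimicking
# the simultaneous choice of b_8 and b_9 in the hierarchical example
M = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 1.0], [1.0, 2.0, 4.0],
              [1.0, 3.0, 9.0], [2.0, 6.0, 18.0]])
Ij, Ej = select_and_solve(M, j=0, candidates=[1, 3, 4, 2])
print(Ij)   # index 4 is skipped because row 4 = 2 * row 3
```

Since rank M_I = #R is guaranteed by the general theory, this greedy search always terminates when enough candidates are supplied.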

Scattered data approximation in 2d
While Example 4.1 was somewhat artificial and could have been resolved also by an improved choice of knots, the following multivariate scenario is more realistic: The function to be reconstructed from scattered (or equally structured or continuous) data is defined or known only on a subset of the plane. In such cases, in general, there will exist basis functions located near the boundary with only tiny fractions of their supports inside the domain. Typically, such functions are critical, not only by negatively affecting the condition of the Gramian system, but also, and even worse, by potentially producing significant shape artifacts. A first study of this problem and its solution by means of extension can be found in [7]. Concretely, we consider the least squares approximation of the function f(x, y) = xy(1 − x² − y²)e^{2x} on the quarter of the unit disk, Ω = {(x, y) ∈ ℝ²_{≥0} : x² + y² ≤ 1}, by means of the Zwart-Powell element b_ZP, which is the box spline associated with the directions (1, 0), (0, 1), (1, 1), (−1, 1).
b_ZP is C¹ and piecewise cubic on a quincunx-type tessellation of the integer grid. Its support is an octagon inside the square [−1, 2] × [0, 3]. The basis we use is selected from the integer translates of b_ZP, scaled down to grid size h = 1/9; it consists of #K = 104 functions. (Shifts of Zwart-Powell elements are not linearly independent, but have a one-dimensional kernel, Σ_{i,j} (−1)^{i+j} b_{i,j} = 0. Therefore, to define a basis, one element has to be omitted. Here, we choose the bottom left function b_{−1,−2}, centered at (−1/18, −1/18).) The condition number cond A ≈ 1.1 × 10¹¹ is rather high, but simple diagonal preconditioning reduces it to 2.1 × 10⁶. Thus, a numerical solution can be computed with high accuracy. Still, as depicted in Fig. 6, the resulting minimizer u reveals a distinct aberration near the boundary. The magnification on the right-hand side shows that the approximation is in fact very close to the given data points, plotted as blue dots, and that the protrusion occurs right in between. The reason becomes apparent if we take a closer look at the circumstances in the suspect part of the domain (see Fig. 5 (right)). Consider the basis function b_j with the rose-shaded support. It "sees" only the single data site x̂₅, marked by the arrow. In order to minimize the deviation, the coefficient u_j of b_j is determined such that u interpolates f at x̂₅. Now, incidentally, this point x̂₅ is very close to the boundary of the support. Consequently, b_j(x̂₅) ≈ 1.2 × 10⁻⁵ is very small, and u_j ≈ −237.8 is unduly large in modulus. Essentially, the aberration is an image of the function u_j b_j restricted to the small, but not negligible, part of the support s_j inside Ω.
Since the span of the shifts of b_ZP contains all quadratic but not all cubic polynomials, we choose P = [1, x, y, x², xy, y²] for extension. Exemplarily, we classify only b_j as critical. Figure 7 (right) shows the six selected neighboring functions and the corresponding extension coefficients. The resulting condition number is cond A^e ≈ 2.7 × 10⁶, but more importantly, as shown in Fig. 7 (left), the approximation u^e is rectified. And indeed, the maximal error ‖f − u‖_{∞,Ω} ≈ 0.21 of the standard approximation is reduced by two orders of magnitude and amounts to ‖f − u^e‖_{∞,Ω} ≈ 2.0 × 10⁻³.
The phenomenon illustrated here can be considered a matter of likelihood, but we note that it is persistent in the sense that it can always occur for scattered data approximation, no matter how dense the data set is with respect to the spline grid.

Poisson's equation in 2d
This example briefly revisits weighted extended B-splines (web-splines) [6,8] as an already known special case of the general extension principle introduced here.
We consider Poisson's equation with homogeneous Dirichlet boundary conditions on the disk with center (1/2, 1/2) and radius 1/2 (see Fig. 8 (left)). To satisfy the boundary conditions, standard tensor product B-splines b_i, i ∈ ℤ², of order n with knot spacing h are multiplied by the weight function w(x, y) = 1 − (2x − 1)² − (2y − 1)² to generate the approximation space spanned by the weighted B-splines w b_i. The red lines in Fig. 9 show the condition numbers of the system matrix for quadratic (left) and cubic (right) weighted B-splines for different values of the grid width h. They become excessively high, which makes the approach impractical for use with iterative solvers. By contrast, the green lines show the significantly improved results after extension. Here, the condition grows moderately at rate O(h⁻²), as is to be expected for a good basis. The optimal rate of convergence O(hⁿ) of the L²-error for web-splines of order n with grid width h is confirmed by Fig. 8 (right).
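The role of the weight function is easy to verify: w vanishes exactly on the boundary circle and is positive inside, so every weighted spline w b_i satisfies the homogeneous Dirichlet condition by construction. A minimal check:

```python
import numpy as np

# weight function from the text; it vanishes on the circle with
# center (1/2, 1/2) and radius 1/2
w = lambda x, y: 1.0 - (2.0 * x - 1.0)**2 - (2.0 * y - 1.0)**2

theta = np.linspace(0.0, 2.0 * np.pi, 100)
xb = 0.5 + 0.5 * np.cos(theta)       # boundary points of the disk
yb = 0.5 + 0.5 * np.sin(theta)
print(np.max(np.abs(w(xb, yb))))     # 0 up to rounding: w b_i vanishes on the boundary
print(w(0.5, 0.5))                   # 1 at the center: w > 0 inside
```

Consequently, no boundary conditions need to be imposed on the spline coefficients themselves.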

Discrete least squares approximation in 3d
Ultimately, the target application of the general procedure developed here is the stabilization of multivariate hierarchical B-spline bases. While univariate and also many bivariate problems can be solved using standard splines, the capacities of current computing technology are easily exceeded when refining trivariate knot grids uniformly to increase accuracy. This explains the need for spline spaces whose resolution is adapted locally to the function to be approximated, in particular in higher dimensions.
In this example, we seek an approximation of a given function f with tensor product B-splines of order n on the interior of a model of the Stanford bunny with 35536 triangular faces, placed inside the cube [−1, 1]³. This function has a distinct peak at the origin, which is best resolved using a hierarchical setup as described in [10]. Here, we use the software package G+SMO [9,11], kindly provided by Bert Jüttler and his group. Figure 10 (left) shows the function evaluated on the boundary of the domain.
An approximation of f in the L²-sense requires the integration of functions on grid cells that are cut off by the boundary. Even if this task seems to be purely technical, it is not easily accomplished (see [2,18] for two recent references). Instead, we employ a discrete least squares fit based on an evaluation of f on a set of points containing, for each B-spline, all grid points and all midpoints of grid edges, faces, and cells of its support inside the domain (see Fig. 11 (left) for an illustration of the analogous setting in the 2d case for three overlapping biquadratic B-splines from three different levels). Thus, the compilation of the Gramian matrix A becomes very cheap, as precomputed values for the entries can be used.
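The described point set has a simple structure: grid points plus midpoints of edges, faces, and cells together form the half-grid (h/2)ℤ^d restricted to the support. A small sketch (function name and setup our own) counts the points of one grid cell:

```python
from itertools import product

def cell_points(h, d):
    """Evaluation points of one grid cell [0, h]^d: vertices, edge midpoints,
    face midpoints, and the center, i.e., the half-grid (h/2)Z^d in the cell."""
    return list(product((0.0, h / 2.0, h), repeat=d))

pts = cell_points(1.0, 3)
print(len(pts))   # 8 vertices + 12 edge + 6 face + 1 cell midpoints = 27 = 3^3
```

This regular structure is what allows the Gramian entries to be assembled from precomputed B-spline values.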
Without extension, A can be singular. This phenomenon is not related to the use of hierarchical B-splines, but can already be observed in the uniform case. It is caused by small clusters of data points near the boundary of the domain that are covered by too many basis functions. Figure 11 (right) illustrates the problem in the bivariate case. The circles mark the centers of three biquadratic B-splines that vanish on all but the two topmost data points. Thus, locally, there are more degrees of freedom than conditions, which causes the singularity of the Gramian matrix. Concretely, the spline with three nonzero coefficients as depicted in the figure vanishes at all data points.
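The counting argument can be reproduced in one dimension with a toy basis (hat functions, our choice): three overlapping basis functions seeing only two data sites yield a rank-deficient Gramian:

```python
import numpy as np

def hat(k, x):
    """Piecewise linear hat function centered at the integer k."""
    return np.maximum(0.0, 1.0 - np.abs(x - k))

# three overlapping hat functions, but only two data sites in their union
x = np.array([0.9, 1.1])
C = np.array([[hat(k, xi) for k in (0, 1, 2)] for xi in x])   # 2 x 3 collocation
A = C.T @ C            # discrete least squares Gramian: rank at most 2, hence singular
print(np.linalg.matrix_rank(A))   # 2 < 3
```

More degrees of freedom than conditions means a nontrivial null space: some spline with nonzero coefficients vanishes at all data sites, exactly as in the figure.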
In the example, a hierarchical spline space of order n is created iteratively. Starting from uniform knots with spacing h = 1/8, a discrete least squares approximation is computed together with the error at the data sites. For each data point x where the error exceeds the required accuracy of 1 × 10⁻⁴, the B-spline b_k whose support center is closest to x is refined. This means that b_k is eliminated from the current basis and replaced by the (n + 1)³ B-splines with half knot spacing whose support is contained in s_k (see [10] for details). The new basis generated this way is then used for the next round of approximation. Figure 12 (left) shows the maximal error as the algorithm proceeds. For orders n = 3, 4 and n = 5, 6, the desired accuracy is achieved after four and three iterations, respectively. To stabilize the basis by extension, B-splines without a complete grid cell of their support inside the domain are classified as critical. Figure 12 (right) shows the resulting condition numbers of the Gramian. For comparison, the dashed lines show the L²-condition numbers of trivariate B-splines of corresponding order with integer knots on the cube [0, 10]³. The results show that, for a given order, the condition numbers depend only marginally on the geometry of the domain or the hierarchical arrangement of B-splines. We remark that diagonal preconditioning reduces the observed condition numbers further by several orders of magnitude.

Conclusion
Extension, as well-known for tensor product spline spaces, is generalized to a large class of function spaces that are generated by local bases. The main advantage of extension is the stabilization of the basis in the sense that condition numbers of Gramian matrices can be reduced significantly. This may help to improve accuracy of numerical solutions and accelerate the convergence of iterative solvers. In case of scattered data problems, extension may also reduce artifacts caused by peripheral data sites.
Funding Open Access funding enabled and organized by Projekt DEAL.

Conflict of interest
The authors declare no competing interests.

Adv Comput Math (2022) 48: 23
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.