Abstract
In this paper, we present a scattered data approximation method for detecting and approximating the discontinuities of a bivariate function and its gradient. The new algorithm is based on partition of unity, polyharmonic kernel interpolation, and principal component analysis. Localized polyharmonic interpolation in a partition of unity setting is applied to detect a set of fault points on or close to discontinuity curves. Then a combination of partition of unity and principal component regression is used to thin the detected points by moving them approximately onto the fault curves. Finally, an ordered subset of these narrowed points is extracted, and a parametric spline interpolation is applied to reconstruct the fault curves. A selection of numerical examples with different behaviors, together with an application to solving scalar conservation law equations, illustrates the performance of the algorithm.
1 Introduction
Fault detection, i.e., the approximation of curves across which a function is discontinuous, is a useful and interesting problem with applications in areas such as edge detection in image processing, geophysical science, the oil industry, and tomography [3,4,5, 9, 27, 44, 49, 55]. In all these cases, a segmentation of an enclosed area related to a particular phenomenon is needed. In addition, some approximation methods require the discontinuity curves and regions to be known in advance in order to obtain accurate solutions [6, 8].
For a given bivariate function, the curves across which the function itself and the gradient of the function are discontinuous are called ordinary and gradient faults, respectively. Most fault detection methods cannot distinguish between these two kinds of faults. However, some algorithms have recently been developed for determining both ordinary and gradient faults of an underlying function [11,12,13].
All fault detection methods rely on some derivative-based operator to indicate a cloud of points near the discontinuity curve. Some algorithms use a pure interpolation operator and measurements of the Gibbs phenomenon near discontinuities [2, 28, 36, 37, 44], while others use gradient and Laplacian operators to define an indicator [12]. In [11], a central difference operator and a statistical procedure are applied, and in [5], a local Taylor expansion and a polynomial annihilation criterion are used to indicate the fault clouds.
Assume that a set of scattered points with corresponding function values is available. In this paper, to deal with the non-uniform nature of scattered data sets, we employ a localized meshfree approximation based on polyharmonic kernels in combination with the partition of unity (PU) method to detect an unorganized set of points on or close to the (unknown) faults of the function from which the data may have originated. Gradient- and Laplacian-based indicators are defined for identifying a cloud of fault points within the original data set. In each PU patch, we apply the polyharmonic interpolation on scaled data points to prevent instability of the kernel matrices. Note that interpolation with polyharmonic kernels is the only scalable approximation among all radial basis function (RBF) approximations [20, 31].
In its first step, our algorithm produces an unorganized cloud of points near the discontinuity curve. To get an accurate reconstruction of the fault, we employ principal component regression (PCR) [34, 35, 39] in combination with the PU approximation to generate a second set of points which is expected to be closer to the fault curve than the primary detected set. We apply PCR instead of linear least-squares approximation for better curve fitting, especially in subregions where the fault curve is nearly vertical.
At the final step, an ordered subset of the previous narrowed fault points is extracted and a smooth parametric spline interpolation is employed to reconstruct the fault curve.
We also address situations with multiple fault curves, as well as special cases with intersections and multi-branch configurations.
A range of numerical examples illustrates the performance and efficiency of the method, including the detection and reconstruction of multi-branch and closed faults, and an application to solving a conservation law problem via the finite volume method (FVM).
2 Polyharmonic spline interpolation
Polyharmonic spline (PHS) interpolation is a particular case of interpolation with conditionally positive definite RBFs [24, 31, 53]. Assume that \(\phi :\mathbb {R}^{d}\to \mathbb {R}\) is a conditionally positive definite function of order m + 1, i.e., with respect to polynomial space \(\mathbb P_{m}(\mathbb R^{d})\). Let \({\varOmega }\subset \mathbb R^{d}\) be a bounded region. The RBF interpolation of a function \(f:{\varOmega }\to \mathbb R\) on a discrete set X = {x1,…,xN}⊂Ω is given by
\[ s_{f,X}(x) = \sum_{j=1}^{N} \alpha_{j}\, \phi(x - x_{j}) + \sum_{n=1}^{Q} a_{n}\, p_{n}(x), \quad x \in {\varOmega}, \]
where {p1,…,pQ} is a basis for \(\mathbb {P}_{m}(\mathbb {R}^{d})\), and α = (α1,…,αN)T and a = (a1,…,aQ)T satisfy
\[ \begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ a \end{bmatrix} = \begin{bmatrix} \boldsymbol{f} \\ 0 \end{bmatrix}, \]
where \(K\in \mathbb {R}^{N\times N}\) with Kjk = ϕ(xj − xk),k,j = 1,…,N, \(P\in \mathbb {R}^{N\times Q}\) with Pjn = pn(xj), j = 1,…,N, n = 1,…,Q, and f = (f(x1),…,f(xN))T. We also need to assume \(N\geqslant Q\) and X is \(\mathbb {P}_{m}^{d}\)-unisolvent to have a full rank matrix P. On the other hand, since ϕ is conditionally positive definite of order m + 1, the symmetric matrix K is positive definite on \(\ker (P^{T})\) as a subspace of \(\mathbb R^{N}\). These all guarantee that the interpolation system is uniquely solvable. The interpolant sf,X can also be written in the Lagrange form as
\[ s_{f,X}(x) = \sum_{j=1}^{N} u_{j}(x)\, f(x_{j}) = u^{T}(x)\, \boldsymbol{f}, \]
where (u1(x),…,uN(x))T =: u(x) satisfies
\[ \begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} u(x) \\ v(x) \end{bmatrix} = \begin{bmatrix} \boldsymbol{\phi}(x) \\ p(x) \end{bmatrix} \]
(with an auxiliary vector \(v(x)\in \mathbb R^{Q}\)),
for ϕ(x) = (ϕ(x − x1),…,ϕ(x − xN))T and p(x) = (p1(x),…,pQ(x))T. The Lagrange functions possess the property uj(xk) = δkj.
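For illustration, the interpolation system above can be assembled directly. The following Python/NumPy sketch (the reference implementation accompanying this paper is in MATLAB; all names here are illustrative) uses the cubic polyharmonic kernel \(\varphi(r)=r^{3}\) with an appended linear polynomial tail:

```python
import numpy as np

def phs_interpolant(X, f, beta=3):
    """Polyharmonic spline interpolant s_{f,X} with kernel phi(r) = r^beta and
    an appended linear polynomial tail {1, x, y}, obtained by solving the
    saddle-point system [[K, P], [P^T, 0]] [alpha; a] = [f; 0]."""
    N = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = D**beta                               # K_jk = ||x_j - x_k||^beta
    P = np.column_stack([np.ones(N), X])      # P_jn = p_n(x_j), degree-1 basis
    Q = P.shape[1]
    A = np.block([[K, P], [P.T, np.zeros((Q, Q))]])
    coef = np.linalg.solve(A, np.concatenate([f, np.zeros(Q)]))
    alpha, a = coef[:N], coef[N:]

    def s(x):                                 # evaluate the interpolant
        x = np.atleast_2d(x)
        r = np.linalg.norm(x[:, None, :] - X[None, :, :], axis=-1)
        return r**beta @ alpha + np.column_stack([np.ones(len(x)), x]) @ a

    return s
```

Since the linear tail is enforced exactly by the constraint rows of the system, the interpolant recovers affine data up to round-off.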
In the case of PHS interpolation, the function ϕ is defined as ϕ(x) = φ(∥x∥2) for all \(x\in \mathbb R^{d}\) where
\[ \varphi(r) = \begin{cases} r^{\beta}, & \beta \notin 2\mathbb{N}, \\ r^{\beta}\log r, & \beta \in 2\mathbb{N}, \end{cases} \]
for real number β > 0. The PHS function φ is (up to a sign) conditionally positive definite of order m + 1 = ⌊β/2⌋ + 1.
Derivatives of f are approximated by corresponding derivatives of sf,X, i.e.,
\[ Lf(x) \approx L s_{f,X}(x) = \sum_{j=1}^{N} L u_{j}(x)\, f(x_{j}) = (Lu(x))^{T} \boldsymbol{f}, \]
where [Lu1(x),…,LuN(x)]T =: Lu(x) is the solution of the same system (4) with the new right-hand side \(\begin {bmatrix} L \boldsymbol {\phi }^{T}(x) & L p^{T}(x) \end {bmatrix}^{T}\), i.e.,
\[ \begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} Lu(x) \\ Lv(x) \end{bmatrix} = \begin{bmatrix} L\boldsymbol{\phi}(x) \\ Lp(x) \end{bmatrix}. \]
Tackling the ill-conditioned system (4) is one of the important issues in RBF approximation methods. As proved in [53, Chap. 12], for polyharmonic kernels, the condition number of this system grows algebraically with respect to the minimum spacing distance between interpolation points. To overcome this problem, the polyharmonic interpolation matrix can be formed and solved on scaled data points with a spacing distance of \(\mathcal O(1)\). This is motivated by the classical 5-point star FD formula for Δu(0,0) on points X = {(0,0),(h,0),(−h,0),(0,h),(0,−h)}. If the points are scaled to
\[ \frac{X}{h} = \{(0,0),\,(1,0),\,(-1,0),\,(0,1),\,(0,-1)\}, \]
then the stencil weights are obtained as [4,− 1,− 1,− 1,− 1], and the original weights are recovered by the scaling h− 2[4,− 1,− 1,− 1,− 1]. Here, the exponent 2 of h is the scaling order of Δ. In the RBF context and on scattered points, this approach is applicable only for PHS and excludes all other well-known kernels. For more details, see [20, 31, 32]. In a more general case, assume that X is a set of points in a local domain \(\mathcal D\) with fill distance h, and that the polyharmonic approximation of Lu(x) for a fixed \(x\in \mathcal D\) is sought, where L is a homogeneous operator with scaling order (homogeneity) s. For example, the scaling order of L = Δ is s = 2 and the scaling order of L = Dα is s = |α|. If X is blown up (scaled) to points \(\frac {X}{h}\) of average fill distance 1 and the Lagrange functions Luj are calculated for the blown-up situation, then the Lagrange functions of the original situation are recovered by the scaling h−sLuj. When polynomials of degree m are appended and monomials \(\{x^{\alpha }\}_{|\alpha |\leqslant m}\) are used as a basis for \(\mathbb P_{m}(\mathbb R^{d})\), it is recommended to shift the points by the center of \(\mathcal D\) and then scale by h to benefit from the local behavior of the monomial basis functions around the origin. See [23, 24, 33, 43] for some applications of this strategy in localized RBF methods for the numerical solution of PDEs and the rational interpolation of singular functions.
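The scaling strategy can be verified numerically: stencil weights for \(\Delta\) computed on the blown-up points \((X-x_{c})/h\) and rescaled by \(h^{-2}\) coincide with the weights computed directly on \(X\). A sketch under the same assumptions as above (\(\varphi(r)=r^{3}\), linear tail, \(d=2\); function names are ours):

```python
import numpy as np

def lap_weights(X, xc, beta=3):
    """Stencil weights w with sum_j w_j f(x_j) ~ (Delta f)(xc), from the PHS
    kernel r^beta with an appended linear tail. The right-hand side applies
    Delta to the kernel (Delta r^b = b*(b + d - 2) r^(b-2), here d = 2) and
    to the polynomial tail {1, x, y} (which is annihilated by Delta)."""
    N = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = D**beta
    P = np.column_stack([np.ones(N), X])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    r = np.linalg.norm(X - xc, axis=1)
    rhs = np.concatenate([beta * beta * r**(beta - 2), np.zeros(3)])
    return np.linalg.solve(A, rhs)[:N]
```

The identity holds because the span of shifted cubic polyharmonics plus linear polynomials is invariant under shifting by \(x_{c}\) and scaling by \(h\), so the Lagrange functions transform exactly.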
3 Partition of unity approximation
The partition of unity (PU) approximation will be used twice in our detection algorithm. In approximation theory, the PU method is an efficient technique to obtain a sparse global approximation by joining a set of localized and well-conditioned approximations. The first combination of partition of unity with RBF interpolation goes back to [52] and [53] by Holger Wendland (see also [24]). Since then, this approach has been used extensively for the numerical solution of partial differential equations [14,15,16,17,18, 40, 43, 45]. A recent application to implicit surface reconstruction is provided in [21].
In PU methods, the global domain Ω is covered by a set of open, bounded, and overlapping subdomains Ωℓ, ℓ = 1,2,…,Nc, with Ω ⊂∪ℓΩℓ, and a set of PU weights is defined on this covering. The nonnegative and compactly supported functions \(w_{\ell }: \mathbb R^{d} \to \mathbb R\), ℓ = 1,2,…,Nc, with supports \(\overline {{\varOmega }_{\ell }}\) are called a PU with respect to the covering {Ωℓ} if
\[ \sum_{\ell=1}^{N_{c}} w_{\ell}(x) = 1 \quad \text{for all } x \in {\varOmega}. \]
If we start with an overlapping covering {Ωℓ} of Ω and we assume Vℓ is an approximation space on Ωℓ and sℓ ∈ Vℓ is a local approximation of a function f on Ωℓ, then
\[ s(x) = \sum_{\ell=1}^{N_{c}} w_{\ell}(x)\, s_{\ell}(x) \]
is a global PU approximation of f on Ω. This global approximation is formed by joining the local approximants sℓ via PU weights wℓ. A possible choice for wℓ is given by Shepard’s weights
\[ w_{\ell}(x) = \frac{\psi_{\ell}(x)}{\sum_{k=1}^{N_{c}} \psi_{k}(x)}, \quad \ell = 1,\ldots,N_{c}, \]
where ψℓ are nonnegative, nonvanishing, and compactly supported functions on Ωℓ. In the usual way, derivatives of f are approximated by derivatives of s, i.e.,
\[ D^{\alpha} f(x) \approx D^{\alpha} s(x) = \sum_{\ell=1}^{N_{c}} D^{\alpha}\big( w_{\ell}(x)\, s_{\ell}(x) \big), \]
or in a general form, for a linear differential operator L with constant coefficients, we have
\[ Lf(x) \approx Ls(x) = \sum_{\ell=1}^{N_{c}} L\big( w_{\ell}(x)\, s_{\ell}(x) \big). \]
So, Leibniz’s rule should be applied to compute the derivatives of the products wℓsℓ. This standard technique is somewhat complicated and requires smooth PU weight functions. In [43], an alternative approach is suggested:
\[ D^{\alpha} f(x) \approx \sum_{\ell=1}^{N_{c}} w_{\ell}(x)\, D^{\alpha} s_{\ell}(x), \]
where Dαf is directly approximated without any detour via the approximant s of f itself. Analogously, for a general case with operator L, we may write
\[ Lf(x) \approx \sum_{\ell=1}^{N_{c}} w_{\ell}(x)\, L s_{\ell}(x). \]
Theoretical results show the same convergence properties for both standard and direct approaches [43], while the second approach is simpler and allows discontinuous PU weights to develop some faster algorithms for approximating the derivatives and solving partial differential equations.
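As a simplified one-dimensional illustration of the direct approach, the following sketch blends the derivatives of local fits with Shepard weights built from Wendland's compactly supported \(C^{2}\) function; local polynomial least-squares fits stand in for the RBF interpolants, and all names are ours:

```python
import numpy as np

def shepard_weights(x, centers, rho):
    """Shepard PU weights w_l(x) = psi_l(x) / sum_k psi_k(x), built from
    Wendland's compactly supported C^2 function psi(r) = (1-r)_+^4 (4r+1)."""
    r = np.abs(x[:, None] - centers[None, :]) / rho
    psi = np.clip(1 - r, 0, None)**4 * (4 * r + 1)
    return psi / psi.sum(axis=1, keepdims=True)

def direct_pu_derivative(x_eval, X, f, centers, rho, deg=3):
    """Direct PU approximation of f': blend the *derivatives* of local
    polynomial fits, f'(x) ~ sum_l w_l(x) s_l'(x), instead of differentiating
    the blended approximant itself (no Leibniz rule needed)."""
    W = shepard_weights(x_eval, centers, rho)
    out = np.zeros_like(x_eval)
    for l, c in enumerate(centers):
        mask = np.abs(X - c) <= rho              # local data set X_l
        p = np.polyfit(X[mask] - c, f[mask], deg)
        dp = np.polyder(p)                       # derivative of local fit
        out += W[:, l] * np.polyval(dp, x_eval - c)
    return out
```

Because the weights sum to one and each local fit reproduces polynomials up to the chosen degree, the blended derivative is exact for such polynomial data.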
The RBF-PU is a special case in which the local approximants Lsℓ are obtained by the RBF approximation on trial sets Xℓ = X ∩Ωℓ in local domains Ωℓ ∩Ω with global index family Jℓ := {j ∈{1,…,N} : xj ∈ Xℓ}. In this case, we have
\[ Ls(x) = \sum_{\ell=1}^{N_{c}} L\Big( w_{\ell}(x) \sum_{j\in J_{\ell}} u_{j}(\ell;x)\, f(x_{j}) \Big) \]
for the standard approach, and
\[ Lf(x) \approx \sum_{\ell=1}^{N_{c}} w_{\ell}(x) \sum_{j\in J_{\ell}} L u_{j}(\ell;x)\, f(x_{j}) \]
for the direct approach. Here, uj(ℓ;x) are Lagrange RBF functions on patches Ωℓ. A simple covering for Ω can be constructed via a set of overlapping balls Ωℓ = B(yℓ,ρℓ) where \({y}_{\ell }\in \mathbb R^{d}\) are patch centers and ρℓ are patch radii. To have a fixed patch radius ρℓ ≡ ρ, we can use the following setup for points, parameters, and domain sizes. We assume the set X has fill distance
\[ h = h_{X,{\varOmega}} := \sup_{x\in{\varOmega}} \min_{x_{j}\in X} \|x - x_{j}\|_{2}. \]
The fill distance indicates how well the points in the set X fill out the domain Ω. Geometrically, h is the radius of the largest possible empty ball that can be placed among the data locations X inside Ω. A data set \(\{{y}_{1},\ldots , {y}_{N_{c}}\}\) with space distance
\[ h_{\mathrm{cov}} = C_{\mathrm{cov}}\, h \]
is used for patch centers. The constant Ccov controls the number of patches, Nc, compared with the number of interpolation points in X. The radius ρ should be large enough to guarantee the inclusion Ω ⊂∪ℓB(yℓ,ρℓ), and to allow enough interpolation points in each patch for a well-defined and accurate local approximation. Thus, we assume
\[ \rho = C_{\mathrm{ovlp}}\, h_{\mathrm{cov}}, \]
and we let the overlap constant Covlp be large enough to ensure the above requirements. It is also possible to assign a variable patch size ρℓ to each patch center yℓ. For example, we can choose ρℓ in such a way that each patch Ωℓ contains a certain number of interpolation points. In this case, we must make sure that the inclusion property Ω ⊂∪ℓΩℓ is still satisfied. In the numerical examples of Section 7, we use both fixed and variable patch radius strategies.
Smooth Shepard weight functions are frequently used in PU approximations (see, for example, [40, 45, 52]). A discontinuous PU weight is also suggested in [43] that highly simplifies the RBF-PU algorithms for solving partial differential equations. Assume the PU weight wℓ(x) takes the constant value 1 if yℓ is the closest center to x and the constant value 0, otherwise. For definition, let
\[ I_{\min}(x) := \{ \ell : \|x - y_{\ell}\|_{2} \leqslant \|x - y_{k}\|_{2} \ \text{for all } k = 1,\ldots,N_{c} \}, \]
and \(I_{\min \limits ,1}(x)\) be the first component of \(I_{\min \limits }(x)\), as \(I_{\min \limits }(x)\) may contain more than one index ℓ. The weight function is then defined by
\[ w_{\ell}(x) = \begin{cases} 1, & \ell = I_{\min,1}(x), \\ 0, & \text{otherwise}. \end{cases} \]
With this definition, we give the total weight 1 to the closest patch and null weights to the other patches. In fact, a local set Xℓ = Ωℓ ∩ X is a common interpolation set for all evaluation points xk with ∥xk − yℓ∥2 ≤∥xk − yj∥2 for j = 1,…,Nc and j≠ℓ. Viewed differently, in a 2D domain, by drawing the Voronoi tiles of the centers \(\{{y}_{1},\ldots ,{y}_{N_{c}}\}\), this means that all evaluation points in tile ℓ use the same local set Xℓ as their interpolation set [43].
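In a sketch, the discontinuous (constant-generated) weight reduces to a nearest-center assignment of evaluation points to patches (illustrative code; names are ours):

```python
import numpy as np

def nearest_patch(points, centers):
    """Constant-generated PU weight: each evaluation point gets weight 1 from
    its closest patch center and 0 from all others; ties are broken by the
    first (smallest) index, mirroring I_min,1. Returns the index l(k)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return np.argmin(d, axis=1)    # argmin returns the first minimizing index

def points_per_patch(points, centers):
    """Group evaluation-point indices by their closest patch: the index
    families k(l) used when looping over patches."""
    lab = nearest_patch(points, centers)
    return {l: np.flatnonzero(lab == l) for l in range(len(centers))}
```

Looping over the groups returned by `points_per_patch` corresponds to looping over the Voronoi tiles of the patch centers.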
4 Principal component regression
Assume that \(X\in \mathbb R^{d\times n}\) is a data matrix containing n points (n columns) in d-dimensional space. PCR finds the best rank-k approximation to the d-dimensional data matrix X for 0 < k < d. It is known that a low-rank approximation can be computed by the singular value decomposition (SVD). Let \(\mu _{X}\in \mathbb R^{d\times 1}\) be the vector of sample means along each row of X. Subtract μX from each column of X (each point) and denote the resulting zero-mean data matrix by X0. The SVD of X0 is given by
\[ X^{0} = U {\varSigma} V^{T}, \]
where \(U\in \mathbb R^{d\times d}\) and \(V\in \mathbb {R}^{n\times n}\) are orthogonal matrices and \({\varSigma }\in \mathbb {R}^{d\times n}\) is a diagonal matrix carrying the singular values σj of X0 in decreasing order on its diagonal. Equivalently,
\[ X^{0} = \sum_{j=1}^{\min\{d,n\}} \sigma_{j}\, u_{j} v_{j}^{T}, \]
where principal components uj and vj are columns of U and V, respectively. Then for \(k\leqslant d\)
\[ {X_{k}^{0}} = U_{k} {\varSigma}_{k} V_{k}^{T} \]
is the closest rank-k matrix to X0 (with respect to the 2-norm, the Frobenius norm, and any matrix norm that depends only on the singular values), where Uk and Vk consist of the first k columns of U and V, respectively, and Σk = diag{σ1,…,σk} [51]. Clearly, the data matrix Xk which is obtained by adding the mean vector μX to all columns of \({X_{k}^{0}}\) is the closest rank-k matrix to X. The columns of Xk (as points in \(\mathbb {R}^{d}\)) are located on a k-dimensional affine subspace of \(\mathbb {R}^{d}\).
In this paper, we need the special case d = 2 and k = 1 for narrowing a cloud of fault points around a discontinuity curve in a two-dimensional domain Ω (see Section 5.2).
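A minimal sketch of this rank-one PCR projection (illustrative Python; the paper's implementation is in MATLAB):

```python
import numpy as np

def pcr_rank1(F):
    """Project 2-D points (columns of the 2 x n matrix F) onto their best
    rank-1 affine approximation: center the data, take the SVD, keep the
    first principal component, and add the mean back."""
    mu = F.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(F - mu, full_matrices=False)
    F1 = S[0] * np.outer(U[:, 0], Vt[0]) + mu
    return F1, U[:, 0]    # projected points and the principal direction
```

Because the projection is along the principal direction rather than onto a fixed coordinate axis, nearly vertical point clouds are handled just as well as horizontal ones.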
5 Detection algorithm
Assume that f is a piecewise smooth real valued function with finite jump discontinuity across some curves on a two-dimensional domain Ω. If the union of such curves is denoted by \(\mathcal F\), we assume that the measure of \(\mathcal F\) is zero and f is smooth in \({\varOmega }\setminus \mathcal F\). Assume further that a set of scattered points X ⊂Ω and associated function values f(x), x ∈ X, are given. The algorithm consists of three steps: (1) picking out a set of points, called fault points as a subset of X, on or close to discontinuity curves using a partition of unity polyharmonic-based approximation, (2) narrowing the fault points using a partition of unity and PCR algorithm, and (3) constructing the fault curve using a parametric interpolation. In the following subsections, we will describe these steps.
5.1 Fault point detection
We aim to detect a small subset F of X consisting of points close to \(\mathcal F\) by considering a procedure based on local kernel interpolation. The points in F are called fault points. In [12], a minimal numerical differentiation formula (MNDF) based on local multivariate polynomial reproduction is given to approximate the function derivatives. This approach is a generalized finite difference (FD) method and the stencil weights are uniquely determined by minimizing their weighted ℓ1 or ℓ2 norm in a suitable way [19].
In this paper, we use the direct PU approximation for localization and PHS kernels for local approximations. We measure and compare the approximate gradient and Laplacian values with some threshold parameters to find a set of fault points F close to the fault curve \(\mathcal F\).
In each patch Ωℓ, we apply the discontinuous PU weight (8) and PHS kernels φ1(r) := r and φ3(r) := r3 to approximate gradient and Laplacian functions, respectively. Note that, φ1 and φ3 are conditionally positive definite of orders 1 and 2, respectively, and thus polynomials of orders at least 1 and 2 need to be appended to their corresponding RBF expansions to obtain well-defined interpolations on unisolvent sets.
According to the PU procedure, each point xk is assigned a local set Xℓ = Ωℓ ∩ X for the index ℓ ∈{1,…,Nc} with \(\ell =I_{\min \limits ,1}(x_{k})=:\ell (k)\). The difference between this approach and the RBF-FD method is that in the new method a set Xℓ may be shared by many points xk, while in the RBF-FD method, each stencil Xk is associated with a unique evaluation point xk. This special type of the direct RBF-PU (D-RBF-PU) method [43] is similar to (but not identical with) the overlapped RBF-FD method of [46]. From (7) and (8), we have
\[ Lf(x_{k}) \approx \sum_{j\in J_{\ell(k)}} \xi_{j,k}^{L}\, f(x_{j}), \]
where \(\xi _{j,k}^{L}= L u_{j}^{*}(\ell (k);x)|_{x=x_{k}}\) are generalized (related to the operator L) Lagrange function values on the set Xℓ = {xj : j ∈ Jℓ} evaluated at xk. In programming, we loop over the patches and look for the indices of points xk for which a prescribed patch ℓ is the closest patch. Such an index family will be denoted by k(ℓ).
Usually, the gradient and the Laplacian operators (∇ and Δ) are used in fault curve detection algorithms. An ordinary fault indicator is defined as
\[ I_{\nabla}(x_{k}) = \| s_{\nabla}(x_{k}) \|_{2}, \]
and a gradient fault indicator is defined as
\[ I_{\Delta}(x_{k}) = | s_{\Delta}(x_{k}) |. \]
A large value of I∇(xk) shows that the smoothness of f is deficient at or near xk, and the large value of IΔ(xk) indicates the same behavior for both f and its gradient. For quantification, we define
\[ F(X, L, \delta) := \{ x_{k} \in X : I_{L}(x_{k}) > \delta \}, \]
where L stands for either ∇ or Δ, and δ is a proper threshold parameter. Thus, we mark F(X,∇,δ1) and F(X,Δ,δ2) as points close to ordinary and gradient faults, respectively, where δ1 and δ2 should be set appropriately to get accurate detections. In [12], a marking method based on the median values of the computed indicators is applied. Since, for functions with constant (linear) values in large areas of the domain, the indicator I∇ (IΔ) is close to zero at points belonging to those areas, the medians fall around zero, and thus many points are indicated as fault points even if they are not close to a fault curve. To overcome this problem, a doubly nested marking strategy is used in [12]. Here, we still use medians but with a different strategy. Using the notations I∇(X) and IΔ(X) for {I∇(xk) : xk ∈ X} and {IΔ(xk) : xk ∈ X}, respectively, we set
where h is the fill-distance of X, and CM, CG, and CL are proper constants that should be set by the user manually. In the above process, we first obtain the threshold parameter δ2 by (11) to form the set F(X,Δ,δ2) containing points close to both ordinary and gradient discontinuities. Then the points close to ordinary faults can be extracted from this set instead of the initial large set X. Thus, the threshold parameter δ1 is obtained by (12) and ordinary fault points F∇ are obtained as
\[ F_{\nabla} = F\big( F(X,\Delta,\delta_{2}),\, \nabla,\, \delta_{1} \big). \]
Obviously the set of points close to gradient discontinuities, FΔ, is
\[ F_{\Delta} = F(X,\Delta,\delta_{2}) \setminus F_{\nabla}. \]
Experiments show that FΔ still contains some points in the neighborhoods of ordinary discontinuities. Following [12], we modify FΔ as follows:
where B(x,η) denotes the ball with center x and radius η, intersected with the set F(X,Δ,δ2), and |B| denotes the cardinality of the set B. The parameter η is set proportional to the fill-distance of the initial data set such that B(x,η) contains a sufficient number of points.
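The two-stage marking described above can be sketched as follows. The paper's actual threshold rules (11)-(12) involve the fill distance h and the constants CM, CG, CL; the median-times-constant rule below is a simplified, hypothetical stand-in:

```python
import numpy as np

def mark_faults(idx_grad, idx_lap, c2=10.0, c1=10.0):
    """Two-stage marking sketch. First select points with a large Laplacian
    indicator (close to either fault type), then split them by the gradient
    indicator. The thresholds here (constant times median) are hypothetical
    stand-ins for the paper's rules (11)-(12). Returns boolean masks
    (ordinary, gradient) over the original point indices."""
    delta2 = c2 * np.median(idx_lap)         # hypothetical threshold rule
    cand = idx_lap > delta2                  # F(X, Delta, delta2): near any fault
    delta1 = c1 * np.median(idx_grad[cand]) if cand.any() else np.inf
    ordinary = cand & (idx_grad > delta1)    # F_grad: ordinary fault points
    gradient = cand & ~ordinary              # F_Delta = F(X, Delta, delta2) \ F_grad
    return ordinary, gradient
```

Computing the second threshold only over the candidate set, rather than over all of X, mirrors the nested strategy described in the text.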
Algorithm 1 presents the fault point detection procedure step by step. Algorithm 2 is called in Algorithm 1, and Algorithm 3 is called in Algorithm 2.
We note that steps 1 and 2 of Algorithm 1 can be run simultaneously using a proper nearest search algorithm.
5.2 Narrowing step
In this subsection, we give a narrowing procedure to move a cloud of points approximately on a fault curve that may be of either ordinary or gradient type. Let us denote the set of detected fault points by
\[ F = \{x_{1}, \ldots, x_{M}\} \]
as a small subset of the initial set X. To approximate the fault curve, the set F needs to be narrowed more.
In [12], an orthogonal distance least-squares regression is used in this step to handle cases in which the points near a parametric curve are distributed almost vertically. In fact, the least-squares approximation is used twice: first for a coordinate rotation and then for curve fitting (see also [50]). One of the advantages of this method is that it can be used to obtain regressions of arbitrary orders. In this paper, we use the standard PCR method to obtain a linear regression via the SVD, which is sufficient for our algorithm. First, we consider the set F as a 2 × M matrix where each column stands for a point in \(\mathbb R^{2}\). Then we compute the centered data matrix F0 = F − μF and its reduced SVD F0 = UΣVT. Here μF is the vector of sample means along each row of F. Finally, we accept the best rank-one matrix approximation \(F_{1}=U_{1}{\varSigma }_{1} {V_{1}^{T}} + \mu _{F}\) as the narrowed points (see Fig. 1).
But since the fault curves are usually nonlinear, we apply this procedure on local subdomains and blend the local approximants in a proper way to obtain a global nonlinear configuration. For this purpose, we employ a new PU approximation based on a new set of patches \(\{{\varOmega }^{\prime }_{\ell }\}\) for \(\ell =1,\ldots ,N^{\prime }_{c}\). In this step, we use a constant radius ρ = Covlphcov for all patches, i.e., \({\varOmega }^{\prime }_{\ell }=B(y^{\prime }_{\ell },\rho )\) for \(\ell =1,\ldots ,N^{\prime }_{c}\). The set of patch centers \(\{y^{\prime }_{\ell }\}\) is a coarsened subset of the detected fault points F which is obtained by Algorithm 5 (below) for H1 = H2 = ρ and \(\tilde F=F\). Note that the size of the PU problem in this step is considerably smaller than that of the first PU approximation because now we are working on a much smaller set of points around a one-dimensional fault curve.
We assume \(F^{\prime }_{\ell } = F\cap {\varOmega }^{\prime }_{\ell }\) and \(J^{\prime }_{\ell } = \{j: x_{j}\in F^{\prime }_{\ell }\}\) with \(|J^{\prime }_{\ell }| = n_{\ell }\). Now, the PCR algorithm is applied on each cloud \(F^{\prime }_{\ell }\) for \(\ell =1,\ldots ,N^{\prime }_{c}\) to get a new narrowing set Fℓ. As described above, if \(F^{\prime }_{\ell }\) and Fℓ are considered as 2 × nℓ matrices and the SVD of \(F^{\prime }_{\ell }-\mu _{F^{\prime }_{\ell }}\) is denoted by UΣVT then \(F_{\ell } = U_{1}{\varSigma }_{1}{V_{1}^{T}}+\mu _{F^{\prime }_{\ell }}\).
Up to here, we have \(N^{\prime }_{c}\) fault sets Fℓ which are supposed to be closer than the set F to the (unknown) fault curve. Depending on the amount of overlap between patches, the cardinality of ∪ℓFℓ is larger than that of the original set F. This means that a fault point xk belonging to more than one patch, say n patches, has n different approximation points from n different sets Fℓ. To obtain a unique approximation for each fault point, we apply the PU approximation on the covering \(\{{\varOmega }^{\prime }_{\ell }\}\). If we use the smooth PU weights (3), then a smooth combination of these n approximations gives a unique approximation \(\tilde x_{k}\) for xk. We prefer to apply the discontinuous weight (8) due to its simplicity. In this case, depending on which center \({y}^{\prime }_{\ell }\) is closest to xk, the approximation point in the corresponding set Fℓ is marked as the unique solution \(\tilde x_{k}\) for xk. Using this approach, we end up with a new set
\[ \tilde F = \{ \tilde x_{k} : x_{k} \in F \}, \]
which is obtained by thinning the detected cloud F and moving its points closer to the fault curve. To narrow the set of obtained points further, we can apply the narrowing procedure once again with F replaced by \(\tilde F\).
In programming, we apply this procedure by looping over patch centers rather than looping over fault points (see Algorithm 4).
Note that we obtain local regressions for each patch and simultaneously move several points xk, k ∈ k(ℓ), closer to the fault curve.
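The narrowing sweep can be sketched as follows: each patch produces a local rank-one PCR projection of the cloud inside it, and every fault point keeps the projection provided by its closest patch center, mimicking the discontinuous PU weight (a simplified sketch with our own names):

```python
import numpy as np

def narrow_cloud(F, centers, rho):
    """One narrowing sweep. For each patch B(c_l, rho), project the local
    cloud onto its first principal line (rank-1 PCR); each point keeps the
    projection from its *closest* patch center. F: (n, 2) array of points."""
    F_new = F.copy()
    dists = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=-1)
    closest = np.argmin(dists, axis=1)        # owning patch index per point
    for l, c in enumerate(centers):
        in_patch = np.linalg.norm(F - c, axis=1) <= rho    # local cloud F'_l
        owned = np.flatnonzero((closest == l) & in_patch)  # points k(l)
        if in_patch.sum() < 2 or owned.size == 0:
            continue
        P = F[in_patch]
        mu = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
        u = Vt[0]                             # local principal direction
        # project owned points onto the local principal line through mu
        F_new[owned] = mu + ((F[owned] - mu) @ u)[:, None] * u
    return F_new
```

Applying the sweep again to its own output corresponds to the repeated narrowing mentioned in the text.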
5.3 Fault curve reconstruction
Usually, the narrowed set \(\tilde F\) contains many points, which gives us the opportunity to select an ordered subset of \(\tilde F\) and reconstruct the fault curve using a parametric approximation method. We use a method similar to that given in [12, 41].
A point z from \(\tilde {F}\) is selected randomly, and a new ordered set Ford is introduced which contains only the point z at the beginning but is enlarged by the following procedure.
First, we find the set of fault points Fz in H1-neighborhood of z, i.e.,
\[ F_{z} = \{ x \in \tilde F : \|x - z\|_{2} \leqslant H_{1} \}. \]
Then we obtain the direction uz for which the variance of points in Fz is maximized. It is well known that this direction is the first column of the U factor in the reduced SVD \({F_{z}^{0}} = U{\varSigma } V^{T}\) where \({F_{z}^{0}}\) is the mean zero data matrix. Now, two subsets \(F_{z}^{+}\) and \(F_{z}^{-}\) of Fz are formed as
\[ F_{z}^{+} = \{ x\in F_{z} : (x-z)^{T} u_{z} \geqslant 0 \}, \qquad F_{z}^{-} = \{ x\in F_{z} : (x-z)^{T} u_{z} < 0 \}. \]
Then points \(z^{+}\in F_{z}^{+}\) and \(z^{-}\in F_{z}^{-}\), if any, are chosen such that
\[ z^{+} = \underset{x\in F_{z}^{+}}{\arg\max}\ (x-z)^{T} u_{z}, \qquad z^{-} = \underset{x\in F_{z}^{-}}{\arg\max}\ \big( -(x-z)^{T} u_{z} \big), \]
and are added to the set Ford. In fact, z+ and z− have maximum distances from z along the directions uz and − uz, respectively. This process is repeated from the two points z+ and z− in both directions until no point is found in their neighborhood (see Fig. 2), or until the distance between one of the newly selected points and one of the previously selected points in Ford (or in all previously constructed sets Ford, for cases with multiple fault curves; see Section 5.4) is less than H1/2. Using the last condition and checking the newly found points in the last iteration, one can check whether the fault intersects itself (see the left-hand side of Fig. 3) or approaches another fault (see the right-hand side of Fig. 3); see Algorithm 5 for a more general case.
Finally, we end up with a sequence of ordered points allowing us to reconstruct the fault curve using a parametric approximation method. We will apply parametric cubic spline interpolation with a small smoothing parameter added. Alternatively, one can simply connect the points successively by line segments to form a polygonal curve instead of a smooth spline curve.
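A minimal sketch of this reconstruction step, with chord-length parametrization and SciPy's CubicSpline standing in for MATLAB's csaps (i.e., pure interpolation, corresponding to the smoothing parameter p = 1; names are ours):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_through_ordered(points, n_eval=200):
    """Reconstruct a fault curve from an *ordered* point sequence by
    parametric cubic-spline interpolation: parametrize by cumulative chord
    length and interpolate each coordinate separately."""
    points = np.asarray(points, float)            # (n, 2) ordered fault points
    chord = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chord)])  # chord-length parameter
    sx = CubicSpline(t, points[:, 0])
    sy = CubicSpline(t, points[:, 1])
    te = np.linspace(t[0], t[-1], n_eval)
    return np.column_stack([sx(te), sy(te)])
```

Replacing `CubicSpline` by a smoothing spline would correspond to a smoothing parameter slightly below 1, as used in the experiments.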
5.4 Cases with multiple fault curves
There may be more than one fault (of either ordinary or gradient type) in the domain, so we continue the above algorithm to find the other faults as follows. Consider a new set that includes all points from \(\tilde {F}\) whose distance to the points in Ford is greater than a parameter H2. If this new set is empty, the algorithm terminates and there is no other fault. Otherwise, we select a new random point z from this new set and repeat the previous procedure to obtain a new ordered set Ford for the next fault (see the right-hand side of Fig. 3). Again, the parametric spline interpolation is applied to reconstruct the new curve. This process is repeated until no point in \(\tilde {F}\) has distance greater than H2 to the points in all previous sets Ford.
In Algorithm 5, the steps of selecting a set of ordered fault points from the larger set \(\tilde F\) are outlined. This algorithm works for cases with multiple fault curves as well.
5.5 Cases with intersections
There may be faults that intersect each other. Suppose that the described algorithm yields m different sequences of ordered points corresponding to m different faults. By construction, the sequences do not share common points on different faults; thus, the reconstructed curves may not intersect even if their corresponding exact faults do. To resolve this problem, we apply the following procedure, which is similar to that given in [12]:
1. The head and the tail of each fault are reconstructed by linear interpolation (line segments) based on the first two and the last two points, respectively (see the upper panels of Fig. 4).
2. The line segments are extended within Ω with length at most H3, and the new end points (if located in Ω) are determined (see the upper panels of Fig. 4).
3. For each new end point e with corresponding line segment ℓ, its closest points zp (on the positive side of ℓ) and zn (on the negative side of ℓ) from other sequences are selected. If the distances ∥e − zp∥2 and ∥e − zn∥2 are less than a prescribed threshold H4, we mark the intersection of (the extension of) ℓ and the line segment between zp and zn as an intersection point (see the upper panels of Fig. 4).
4. Intersection points whose distances are less than a prescribed threshold H5 are replaced by their average (see the lower panels of Fig. 4).
5. The ordered sequences are updated either by adding one or two intersection points or by joining to another sequence and an intersection point.
In Algorithm 6, by [[x,y]], we mean a line segment with x and y as its end points.
6 Parameter selection and the main algorithm
The parameter H1 in Algorithm 5 is the approximate distance between two consecutive fault points on which the parametric interpolation is built. For an accurate reconstruction, H1 must be selected small enough; on the other hand, it should be large enough to ensure that a fault whose narrowed points are sparsely spaced does not split into two or more pieces. Usually, H1 is set proportional to the fill-distance of the initial data set. When the ordered points on one fault are obtained, the algorithm starts again from a new point on another fault. The distance between this new point and the points of the previous faults is more than H2. Thus, H2 should be chosen large enough so that the new point falls on another fault and a fault is not reconstructed twice. In our experiments, the value H2 = H1 leads to satisfactory results.
In Algorithm 6, the parameter H3 should be close to H1/2 because, from Section 5.3, the minimum distance between two disjoint faults is determined by H1/2; we set H3 = H1/2. On the other hand, the threshold parameter H4 should be proportional to H1 because H1 is the maximum distance between two consecutive points in each ordered sequence; here we set H4 = 2H1. Finally, H5 should be selected small enough to unite intersection points that are very close to each other. The choice H5 = H1 works well in all experiments.
Note that all parameters Hk, k = 1,…,5 are explicitly related to the approximate fill-distance h of the initial data set X via H5 = H4/2 = 2H3 = H2 = H1 = CHh. In our experiments, we use CH = 6. The fill distance h is approximated by \(h=1/\sqrt N\) where N is the number of initial points in X.
To approximate s∇ and sΔ for the indicators, in the first PU algorithm we use a variable patch radius ρℓ for each patch center yℓ. Using a nearest-neighbor search, we select the n = 12 nearest points to yℓ and then set the radius ρℓ to the maximum distance between yℓ and those 12 surrounding points. In the examples, \(Y=\{y_{1},\ldots ,y_{N_{c}}\}\) is assumed to be a grid set in the domain Ω with spacing distance hcov = Ccovh = 2.5h. This parameter selection guarantees the inclusion Ω ⊂∪ℓΩℓ for both the random and Halton point sets in our numerical examples. Moreover, for the second PU algorithm in the narrowing step, we use the constant radius ρ = Covlphcov with Covlp = 1.5. We also set CM = 1/4, CG = 1, CL = 1/2, and η = 4h in all examples unless specified otherwise. Finally, the main algorithm of the fault detection method can be written as follows.
7 Experimental results
In this section, the results of several experiments are given. The efficiency of the method is confirmed by testing it on various kinds of problems: problems with multiple faults of the same type (ordinary or gradient) and problems with faults of different types, with or without intersections. The initial set X is a sequence of N uniformly distributed random points on a square domain \({\varOmega }\subset \mathbb {R}^{2}\). We use N = 10000 random points (Fig. 5, left) in our test examples unless otherwise stated; in some examples, Halton points or a set of random points of varying density are used instead. The constant-generated weight function (8) is applied in both PU subroutines.
For the final curve reconstruction, we use the csaps function of MATLAB with smoothing parameter p = 0.9999 to obtain a cubic smoothing spline on the ordered fault points. The case p = 1 works as well, but we choose a slightly smaller value to obtain smoother fits. In Example 7.7, we use p = 1 to better capture the higher curvatures of the solution at the fins and flukes of the dolphin.
We suppose that the type of faults (ordinary or gradient) is not known in advance. Thus, both gradient and Laplace indicators are used by default for all examples.
In Example 7.1, we compute the root mean distance of the detected points from a fine set of points on the real fault to measure the closeness of detected points to the exact fault or to measure the error of the final fault reconstruction. Assume that Z is a set of m points on the exact fault Γ and F is the set of detected points around Γ. We define the root mean distance dist(F,Z) by [12]
where |F| stands for cardinality of F. In the case of multiple faults, we measure the error for each fault separately. In experiments, we assume that Z is a set of m = 500 points on each individual fault.
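Since the displayed formula is not reproduced above, the following sketch assumes the usual reading of "root mean distance": the root of the mean squared nearest-point distance from F to Z.

```python
import numpy as np

def root_mean_distance(F, Z):
    """Root mean distance of the detected points F from a discrete
    sample Z of the exact fault, assumed to be
        dist(F, Z) = sqrt( (1/|F|) * sum_{x in F} min_{z in Z} ||x - z||^2 ).
    """
    F = np.asarray(F, dtype=float)
    Z = np.asarray(Z, dtype=float)
    # nearest-point distance from each detected point to the fault sample
    D = np.linalg.norm(F[:, None, :] - Z[None, :, :], axis=-1)
    return np.sqrt(np.mean(np.min(D, axis=1) ** 2))
```

The same routine measures dist(Z′,Z) by passing the points of the reconstructed curve as the first argument.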
All algorithms are implemented in MATLAB and executed on a machine with an Intel Core i7 processor, 4.00 GHz, and 16 GB RAM. The code is freely available at GitHub via https://github.com/ddmirzaei/FaultDetection to facilitate the reproduction of the examples presented in this section. The connection between scripts in the GitHub repository and the pseudocodes in the paper is as follows.
| mfile | Pseudocode |
|---|---|
| FaultDetection.m | Algorithm 1 |
| RBF_PU.m | Algorithm 2 |
| LagrangeMat.m | Algorithm 3 |
| PCR_PU.m | Algorithm 4 |
| OrderedSubset.m | Algorithm 5 |
| FixingIntersections.m | Algorithm 6 |
| RunExample.m | Algorithm 7 |
Other subroutines in the repository are called in the above MATLAB functions.
7.1 Example 1
Consider the test function [12]
where (x,y) ∈ [0,1]2 =: Ω. As can be seen in the upper-left panel of Fig. 6, this function has an ordinary fault and a gradient fault. The faults are exactly represented by
The upper-right panel of Fig. 6 shows the clouds of fault points detected by the algorithm out of N = 10000 random points in Ω. The lower-left panel shows the narrowed and ordered points. Finally, in the lower-right panel, the exact and reconstructed curves using smooth spline interpolation are depicted. We also test the algorithm on random points of varying density (Fig. 5, right). The detected fault points and the final reconstructed curves are shown in Fig. 7. The algorithm handles this case well because the patch sizes are selected automatically; the only difference is that more patch centers are used in the part of the region with higher point density.
To verify the accuracy of the method, we first discretize each of the exact ordinary and gradient faults by a set Z of m = 500 equidistant points. Then we compute dist(F,Z), where F is the set of primary detected fault points around the fault curve; dist(F,Z) measures the closeness of the detected points to the exact fault. We also let \(Z^{\prime }\) be a set of m = 500 equidistant points on the final reconstructed curve (after narrowing and curve fitting) and measure the root mean distance \(\text {dist}(Z^{\prime },Z)\) to estimate the error of the reconstruction. Results are given in Table 1 for both ordinary and gradient faults for N = 5000, 10000, 20000 uniformly distributed random points on [0,1]2. The results are comparable with those of [12] and appear better than those of the grid-based algorithm of [11].
Finally, we note that the errors for N = 10000 varying density points (Fig. 5 right) are dist(F,Z) = 5.2e − 3 and \(\text {dist}(Z^{\prime },Z)=3.5\mathrm {e}-3\) for the ordinary fault and dist(F,Z) = 2.6e − 3 and \(\text {dist}(Z^{\prime },Z)=1.9\mathrm {e}-3\) for the gradient fault.
7.2 Example 2
Consider the test function [11]
where (x,y) ∈ [0,1]2. This function has two ordinary faults that intersect each other at a point inside the domain (see the upper left-hand side of Fig. 8). Faults are exactly represented as
The upper-right panel of Fig. 8 shows the clouds of fault points detected by the algorithm. The lower-left panel shows the narrowed and ordered points. Finally, in the lower-right panel, the exact and reconstructed curves using smooth spline interpolation are plotted.
7.3 Example 3
Consider the test function
where (x,y) ∈ [0,1]2. As shown in the upper-left panel of Fig. 9, this function is discontinuous across six curves. The upper-right panel of Fig. 9 shows the clouds of fault points detected by the algorithm. Narrowed and ordered points are depicted in the lower-left panel, while the exact and reconstructed curves are shown in the lower-right panel.
7.4 Example 4
In this example, we consider the function [12]
on [0,1]2 (see Fig. 10). This function has three gradient faults where one of them intersects two others. The given algorithm detects all fault curves and handles the intersections. For this example, we use the value CM = 1 instead of 1/4. The primary detected points, the narrowed cloud, the ordered points, and the exact and reconstructed curves are all depicted in Fig. 10.
7.5 Example 5
Consider the test function [12]
where (x,y) ∈ [0,1]2 (see Fig. 11). This function has a gradient fault and an ordinary fault which intersect each other at a small angle. In Fig. 11, the primary detected clouds, the narrowed and ordered points, and the exact and reconstructed curves are shown.
7.6 Example 6
In this example, we have a tangential intersection. Consider the test function [12]
where (x,y) ∈ [0,1]2 (see Fig. 12). This function has two gradient faults which are tangent to each other at the center of the square. The upper-right panel of Fig. 12 shows the clouds of fault points detected by the algorithm. The detected points around the intersection point cannot be separated into the two faults; thus, the algorithm determines two intersection points and a common fault between them (see the narrowed and ordered points in the lower-left panel and the exact and reconstructed curves in the lower-right panel of Fig. 12).
7.7 Example 7
As a toy example, using the closed parametric curve C(t) = x(t)i + y(t)j for t ∈ [0,2π), which represents the planar shape of a dolphin, we construct the test function
for (x,y) ∈ [0,1]2. Obviously, this function is discontinuous on C. We use N = 30000 Halton points as an initial set X. Results are given in Fig. 13 where the narrowed fault points and the exact and reconstructed curves are illustrated.
8 An application for solving conservation laws
In this section, an application of the presented fault detection method to the solution of conservation law equations via weighted essentially non-oscillatory (WENO) finite volume methods (FVM) is described. A brief summary of the solution of conservation law equations by WENO FVM is outlined here; the reader is referred to [1, 10, 25, 30, 42, 47, 54] for a complete explanation.
8.1 Spatial discretization
Consider the following problem of scalar conservation law
where \(u\equiv u\left (t,x \right ) : I\times {\varOmega } \longrightarrow \mathbb {R} \) is the solution of the problem, \({\varOmega } \subset \mathbb {R}^{2}\) is an open and bounded computational domain, and \(I:=\left (0 , t_{f} \right ]\) is a time interval with final time tf. The function \(F\left (u \right ) :=\left (f_{1}\left (u \right ) , f_{2}\left (u \right ) \right )^{T}\) is called the flux function. To discretize problem (17) via the FVM, a conforming triangulation \(\mathcal {T}\) of Ω is considered, where each \(T\in \mathcal {T}\) is a triangle (control volume). The integral form of (17) on each triangle \(T\in \mathcal {T}\) at time t ∈ I is obtained as
where
is the cell average value of u on triangle T at time t. Here, the boundary of T is denoted by ∂T and consists of the union of the edges Γj, j = 1,2,3, of triangle T, with unit outward normal vectors nj. Thus, (18) can be written as
The line integrals in the above equation can be approximated by an NG-point Gaussian quadrature formula as
where \( \omega _{\ell }^{(j)}\) are Gaussian weights and \(x_{\ell }^{(j)}\) are Gaussian points on edge Γj of triangle T. The Lax-Friedrichs numerical flux
is used here where \(u_{\text {in}}(\cdot ,x_{\ell }^{(j)})\) is the approximate solution at Gaussian point \(x_{\ell }^{(j)}\) of triangle T itself and \(u_{\text {out}}(\cdot ,x_{\ell }^{(j)})\) is the approximate solution at the same point but from an adjacent triangle which shares Γj as a common edge with T. The coefficient σ is obtained by
Therefore, cell average values \(\lbrace \overline {u}_{T}(t) \rbrace _{T\in \mathcal {T}}\) can be updated as
where
It is necessary to reconstruct uin and uout from the current cell average values \(\lbrace \overline {u}_{T}(t) \rbrace _{T\in \mathcal {T}}\). There are different methods for reconstruction that will be explained later.
8.2 Time discretization
For hyperbolic equations, an ODE solver that maintains the stability of the problem and avoids oscillations should be employed. Here we use the strong stability preserving Runge-Kutta (SSPRK) method of order 3 [7, 26, 48]. Consider the time-dependent system (20). To advance the data \(\lbrace \overline {u}_{T}(t^{n}) \rbrace _{T\in \mathcal {T}} \) at time tn to \(\lbrace \overline {u}_{T}(t^{n+1}) \rbrace _{T\in \mathcal {T}}\) with step length Δt, the SSPRK3 method consists of three stages.
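The three stages take the standard Shu–Osher form; a minimal sketch (the function name is ours), with `L` the spatial operator from (20):

```python
def ssprk3_step(u, dt, L):
    """One step of the third-order strong stability preserving
    Runge-Kutta method (Shu-Osher form):
        u1      = u + dt*L(u)
        u2      = 3/4*u + 1/4*u1 + 1/4*dt*L(u1)
        u_{n+1} = 1/3*u + 2/3*u2 + 2/3*dt*L(u2)
    Works for scalars or NumPy arrays of cell averages."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * L(u1)
    return u / 3.0 + 2.0 / 3.0 * u2 + 2.0 / 3.0 * dt * L(u2)
```

Each stage is a convex combination of forward Euler steps, which is what gives the method its strong stability property.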
By applying the CFL condition, Δt is restricted as
where rT is the radius of the incircle of triangle T and \(\eta _{t}^{\max \limits }={\max \limits } | {F^{\prime }}(u)\cdot n |\), where the maximum is taken over all Gaussian points on the edges of triangle T.
8.3 Reconstruction step
As discussed before, the approximate values uin and uout in (21) should be reconstructed from the cell average values \(\lbrace \overline {u}_{T}(t) \rbrace _{T\in \mathcal {T}}\) in each time step. An efficient particle reconstruction scheme based on polyharmonic spline interpolation is described in [32]. For other reconstruction methods, see, for example, [1, 42].
Assume that \(\lbrace x_{c_{T}} \rbrace _{T\in \mathcal {T}} \) is the set of barycenters of triangles in \(\mathcal {T}\). For each reference triangle \(T\in \mathcal {T}\), consider a stencil \(\mathcal {S}=\lbrace T_{1},\ldots , T_{n} \rbrace \subset \mathcal {T}\), where \(T\in \mathcal {S}\). In each triangle \(R\in \mathcal {S}\), the cell average value \(\overline {u}_{R}(t)\) is considered as an approximation for u at \(x_{c_{R}}\) at time t. So we are looking for a function s that interpolates u from the given values \(u(x_{c_{R}},t) \approx \overline {u}_{R}(t)\) for \(R \in \mathcal {S}\), i.e.,
In polyharmonic spline interpolation, s is written as
and by imposing the interpolation conditions, the same system as (2) results, where f is replaced by the vector of cell average values on the stencil \(\mathcal S\).
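As a sketch of this interpolation step (not the paper's implementation; the scaling of the data points and the stencil handling are omitted, and the function name is ours), a dense solve of the thin plate spline system with \(\phi(r)=r^2\log r\) and a linear polynomial tail might look like:

```python
import numpy as np

def tps_interpolate(X, f, Xe):
    """Polyharmonic (thin plate) spline interpolation in 2D:
    s(x) = sum_j a_j*phi(||x - x_j||) + p(x), phi(r) = r^2*log(r),
    with a linear polynomial p and moment conditions P^T a = 0.
    X: (n,2) centers, f: (n,) data, Xe: (m,2) evaluation points."""
    def phi(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            v = r * r * np.log(r)
        return np.where(r > 0, v, 0.0)   # phi(0) = 0 by continuity

    n = X.shape[0]
    A = phi(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), X])              # linear polynomial basis
    K = np.block([[A, P], [P.T, np.zeros((3, 3))]])  # saddle-point system
    coef = np.linalg.solve(K, np.concatenate([f, np.zeros(3)]))
    a, b = coef[:n], coef[n:]
    Ae = phi(np.linalg.norm(Xe[:, None] - X[None, :], axis=-1))
    Pe = np.hstack([np.ones((Xe.shape[0], 1)), Xe])
    return Ae @ a + Pe @ b
```

Since the kernel is conditionally positive definite of order 2, the system is uniquely solvable whenever the centers are not collinear, and linear functions are reproduced exactly.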
To avoid nonphysical oscillations in the solution, WENO reconstruction is frequently used in the literature [25, 30, 38, 54]. In WENO schemes, a weighted average of reconstructions from a set of stencils \(\{\mathcal S_{k}\}_{k=1}^{K}\), with \(T\in \mathcal S_{k}\) for all k = 1,…,K, is used. The weights are chosen in such a way that the oscillations are minimized. We use a WENO reconstruction with an oscillation indicator based on the native space norm of the underlying polyharmonic kernel, which is fully described in [32]. Details are omitted here; the reader is referred to the original sources.
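The weighted-average idea can be illustrated with generic smoothness-based weights; the exponent, the regularization ε, and the uniform linear weights below are illustrative assumptions, while in the paper the indicator I_k is the polyharmonic native space norm:

```python
import numpy as np

def weno_weights(indicators, gamma=None, eps=1e-6, s=2):
    """Generic WENO weights: alpha_k = gamma_k / (eps + I_k)^s,
    omega_k = alpha_k / sum(alpha), where I_k is an oscillation
    indicator per stencil and gamma_k are linear weights
    (uniform by default). Small I_k (smooth stencil) -> large weight."""
    I = np.asarray(indicators, dtype=float)
    g = np.full_like(I, 1.0 / I.size) if gamma is None else np.asarray(gamma)
    alpha = g / (eps + I) ** s
    return alpha / alpha.sum()
```

The reconstruction at a point is then the sum of the per-stencil reconstructions multiplied by these weights, so an oscillatory stencil contributes almost nothing.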
8.4 Combination with the fault detection algorithm
In a WENO reconstruction, the process of selecting the stencils \(\lbrace \mathcal {S}_{k}\rbrace _{k=1}^{K}\) for a \(T\in \mathcal {T}\) is an important part [22, 25, 29, 38]. In polyharmonic kernel reconstruction, the size of each stencil \(\mathcal {S}_{k}\) (the number of triangles in \(\mathcal {S}_{k}\)) should not be less than Q, the dimension of the polynomial space. Three types of stencils for each triangle T are introduced in [1]: centered, forward-sector, and backward-sector stencils. Stencils of size 7 of each type are displayed in Fig. 14.
It was concluded in [1] that in WENO polyharmonic spline reconstruction with \(\phi =\| \cdot \|_{2}^{2} \log (\| \cdot \|_{2})\), the use of 7 stencils (1 centered, 3 forward, and 3 backward), all of size 4, is sufficient for smooth solutions, while for solutions with a discontinuity or a steep gradient at least 7 stencils (1 centered, 3 forward, and 3 backward), all of size 7, are required. For smooth solutions, we can even dispense with WENO and use a simple central-stencil approximation instead. Usually, the solution of a conservation law problem has a steep gradient or becomes discontinuous on a curve or a small subregion of the global domain Ω. Thus, it is reasonable to detect such fault curves or regions in advance and use a larger number of stencils, or larger stencil sizes, in those regions only.
Following this strategy, at each time step, \(X = \lbrace x_{c_{T}} \rbrace _{T\in \mathcal {T}}\) is considered as a set of scattered points in Ω and \(\lbrace \overline {u}_{T}(t) \rbrace _{T \in \mathcal {T}}\) as an approximation of the solution values at these points. Then, using the proposed fault detection algorithm with a Laplace indicator, a set of fault barycenters F = F(X,Δ,δ2) is detected, where δ2 is defined in (11). As discussed before, F(X,Δ,δ2) contains points close to both ordinary and gradient discontinuities. A triangle whose barycenter belongs to F is marked as a fault triangle. We use 7 stencils of size 7 for a fault triangle and a single central stencil of size 7 otherwise. This means that the WENO reconstruction is used only in the parts of the domain where the solution has a steep gradient or is about to become discontinuous; other parts are handled by a simple central-stencil reconstruction. The algorithm of this hybrid method is as follows.
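The fault-triangle marking step of this hybrid strategy can be sketched as follows; the function name, the tolerance-based membership test, and the string labels are hypothetical illustrations, not the paper's pseudocode:

```python
import numpy as np

def mark_fault_triangles(barycenters, fault_points, tol):
    """Mark triangles whose barycenter lies within tol of a detected
    fault point. Fault triangles get the full WENO stencil set
    (7 stencils of size 7); the remaining triangles use a single
    central stencil of size 7."""
    B = np.asarray(barycenters, dtype=float)
    F = np.asarray(fault_points, dtype=float)
    D = np.linalg.norm(B[:, None, :] - F[None, :, :], axis=-1)
    is_fault = D.min(axis=1) < tol
    return ["weno7x7" if m else "central7" for m in is_fault]
```

In the paper, the detected set F is itself a subset of the barycenters, so the tolerance test reduces to set membership; the tolerance form above merely makes the sketch robust to floating-point comparison.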
Numerical experiments show that this approach leads to a speedup of around 30% while retaining the accuracy of the solution compared to the original WENO reconstruction (see the next section for an example).
8.5 A numerical example
Consider the nonlinear Burgers' equation
where \(u:=u(t,x):[0,1]\times [-0.5,0.5]^{2}\longrightarrow \mathbb {R}\), with initial condition
where r0 = 0.15 and c0 = (− 0.2,− 0.2), with periodic boundary conditions. The initial condition is shown in Fig. 15. The solution develops a very steep gradient as time advances.
The MATLAB code of this part is also freely available in the GitHub repository https://github.com/ddmirzaei/FaultDetection_Application to facilitate the reproduction of the results. All constants and parameters of the fault detection algorithm are set as before, except CM, which is set to 1 in this experiment. The previous value CM = 1/4 also works, but we use CM = 1 to slightly reduce the number of fault triangles. Since the exact solution is not available, Table 2 reports the errors between the solution of the full WENO reconstruction and that of the hybrid method for triangle sizes \(h_{T}=\{\frac {1}{16},\frac {1}{32},\frac {1}{64},\frac {1}{128}\}\). We observe good agreement between the two solutions, which means that the hybrid method retains the accuracy of the full WENO reconstruction. The next two columns of the table contain the total number of triangles and the average number of fault triangles over all time steps. For this example, and for mesh sizes hT = 1/64, 1/128, about 4–5% of all triangles are detected as fault triangles. The total run times are given in the last two columns; we observe a speedup of about 30% with the new hybrid method.
For a better illustration, numerical solutions obtained with the hybrid method at time levels t = 0, 0.3, 0.5, and 1 are shown in Fig. 15, and the fault barycenters at t = 1 are shown in Fig. 16.
9 Conclusion
A localized scattered data approximation method based on polyharmonic spline interpolation, in combination with the partition of unity method, was proposed to define gradient-based and Laplacian-based indicators for detecting a cloud of fault points on or close to the discontinuities of a bivariate function. The polyharmonic interpolation was performed on scaled data points to prevent instability of the kernel matrices. To obtain an accurate reconstruction of the fault, a localized principal component regression was applied to generate a second set of points that lie closer to the fault curve than the primary detected set. Then an ordered subset of these narrowed points was extracted, and a smooth parametric spline interpolation was employed to reconstruct the fault curve. Situations with multiple fault curves and special cases with intersections and multi-branch configurations were addressed. Finally, an application to solving conservation law PDEs was given.
Applications to other areas, such as image processing and geosciences, and the generalization to trivariate functions are left for future study.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Aboiyar, T., Georgoulis, E.H., Iske, A.: Adaptive ADER methods using kernel-based polyharmonic spline WENO reconstruction. SIAM J. Sci. Comput. 32, 3251–3277 (2010)
Allasia, G., Besenghi, R., Cavoretto, R.: Adaptive detection and approximation of unknown surface discontinuities from scattered data. Simul. Model. Pract. Theory 17, 1059–1070 (2009)
Arandiga, F., Cohen, A., Donat, R., Dyn, N., Matei, B.: Approximation of piecewise smooth functions and images by edge-adapted (ENO-EA) nonlinear multiresolution techniques. Appl. Comput. Harmon. Anal. 24 (2), 225–250 (2008)
Arandiga, F., Cohen, A., Donat, R., Matei, B.: Edge detection insensitive to changes of illumination in the image. Image Vis. Comput. 28(4), 553–552 (2010)
Archibald, R., Gelb, A., Yoon, J.: Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 43(1), 259–279 (2005)
Arge, E., Floater, M.: Approximating scattered data with discontinuities. Numer. Algoritm. 8, 149–166 (1994)
Barth, T.J., Deconinck, H.: High order methods for computational physics. Springer, Berlin (1999)
Besenghi, R., Allasia, G.: Scattered data near-interpolation with applications to discontinuous surfaces. In: Curve and Surface Fitting, pp. 75–84, Nashville, TN. Vanderbilt University Press (2000)
Bhardwaj, S., Mittal, A.: A survey on various edge detector techniques. Procedia Technol. 4, 220–226 (2012)
Boscheri, W.: High order direct Arbitrary-Lagrangian–Eulerian (ALE) finite volume schemes for hyperbolic systems on unstructured meshes. SIAM J. Sci. Comput. 24, 7510–801 (2017)
Bozzini, M., Rossini, M.: The detection and recovery of discontinuity curves from scattered data. J. Comput. Appl. Math. 240, 148–162 (2013)
Bracco, C., Davydov, O., Giannelli, C., Sestini, A.: Fault and gradient fault detection and reconstruction from scattered data. Comput. Aided Geom. Des. 75, 101786 (2019)
Cates, D., Gelb, A.: Detecting derivative discontinuity locations in piecewise continuous functions from fourier spectral data. Numer. Algoritm. 46, 59–84 (2007)
Cavoretto, R., De Marchi, S., De Rossi, A., Santin, G.: Partition of unity interpolation using stable kernel-based techniques. Appl. Numer. Math. 116, 95–107 (2017)
Cavoretto, R., De Rossi, A.: Adaptive meshless refinement schemes for RBF - PUM collocation. Appl. Math. Lett. 90, 131–138 (2019)
Cavoretto, R., De Rossi, A., Perracchione, E.: Efficient computation of partition of unity interpolants through a block-based searching technique. Comput. Math. Appl. 71, 2568–2584 (2016)
Cavoretto, R., De Rossi, A., Perracchione, E.: Optimal selection of local approximants in RBF-PU, interpolation. J. Sci. Comput. 74, 1–22 (2018)
Ahmadi Darani, M.R.: The RBF partition of unity method for solving the Klein-Gordon equation. Engineering with Computers, In press (2020)
Davydov, O., Schaback, R.: Error bounds for kernel based numerical differentiation. Numer. Math. 132, 243–269 (2016)
Davydov, O., Schaback, R.: Optimal stencils in Sobolev spaces. IMA J. Numer. Anal. 39, 398–422 (2019)
Drake, K.P., Fuselier, E.J., Wright, G.B.: Implicit surface reconstruction with a curl-free radial basis function partition of unity method. SIAM J. Sci. Comput. 42, A3018–A3040 (2022)
Dumbser, M., Kaser, M.: Arbitrary high order non-oscillatory finite volume schemes on unstructured meshes for linear hyperbolic systems. J. Comput. Phys. 221, 693–723 (2007)
Farazandeh, E., Mirzaei, D.: A rational RBF interpolation with conditionally positive kernels. Adv. Comput. Math. 47, 74 (2021)
Fasshauer, G.E.: Meshfree Approximations Methods with Matlab. World Scientific, Singapore (2007)
Friedrich, O.: Weighted essentially non-oscillatory schemes for the interpolation of mean values on unstructured grids. J. Comput. Phys. 144, 194–212 (1998)
Gottlieb, S., Shu, C.W.: Total variation diminishing Runge-Kutta schemes. Math. Comput. 67, 73–85 (1998)
Gout, C., Le Guyader, C.: Segmentation of complex geophysical structures with well data. Comput. Geosci. 10, 361–372 (2006)
Gutzmer, T., Iske, A.: Detection of discontinuities in scattered data approximation. Numer. Algoritm. 16, 155–170 (1997)
Harten, A., Chakravarthy, S.R.: Multidimensional ENO schemes for general geometries. Tech. Rep., ICASE 221, 91–76 (1991)
Hu, C., Shu, C.W.: Weighted essentially non-oscillatory schemes on triangular meshes. J. Comput. Phys. 150, 97–127 (1999)
Iske, A.: On the approximation order and numerical stability of local Lagrange interpolation by polyharmonic splines. In: International Series of Numerical Mathematics 145, pp. 153–165. Birkhäuser Verlag, Basel (2003)
Iske, A.: On the construction of kernel-based adaptive particle methods in numerical flow simulation. In: Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM), pp. 197–221, Berlin, Springer (2013)
Jabalameli, M., Mirzaei, D.: A weak-form RBF-generated finite difference method. Comput. Math. Appl. 79, 2624–2643 (2020)
Jeffers, J.: Two case studies in the application of principal component analysis. Appl. Stat. 16, 225–236 (1967)
Jolliffe, I.T.: Principal Component Analysis, 2nd edn. Springer, New York (2002)
Jung, J. H., Durante, V. R.: An iterative adaptive multiquadric radial basis function method for the detection of local jump discontinuities. Appl. Numer. Math. 59(7), 1449–1466 (2009)
Jung, J.H., Gottlieb, S., Kim, S.O.: Iterative adaptive RBF methods for detection of edges in two-dimensional functions. Appl. Numer. Math. 61(1), 77–91 (2011)
Kaser, M., Iske, A.: ADER schemes on adaptive triangular meshes for scalar conservation laws. J. Comput. Phys. 205, 486–508 (2005)
Kendall, M.G.: A Course in Multivariate Analysis. Griffin, London (1957)
Larsson, E., Shcherbakov, V., Heryudono, A.: A least squares radial basis function partition of unity method for solving PDEs. SIAM J. Sci. Comput. 39, A2538–A2563 (2017)
Lee, I.K.: Curve reconstruction from unorganized points. Comput. Aided Geom. Des. 17, 161–177 (2000)
LeVeque, R.J.: Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, Cambridge (2002)
Mirzaei, D.: The direct radial basis function partition of unity (d-RBF-PU) method for solving PDEs. SIAM J. Sci. Comput. 43, A54–A83 (2021)
Romani, L., Rossini, M., Schenone, D.: Edge detection methods based on rbf interpolation. J. Comput. Appl. Math. 349, 532–547 (2019)
Safdari-Vaighani, A., Heryudono, A., Larsson, E.: A radial basis function partition of unity collocation method for convection–diffusion equations arising in financial applications. J. Sci. Comput. 64(2), 341–367 (2015)
Shankar, V.: The overlapped radial basis function-finite difference (RBF-FD) method: a generalization of RBF-FD. J. Comput. Phys. 342, 211–228 (2017)
Shu, C.W.: High order ENO and WENO schemes for computational fluid dynamics. In: High Order Methods for Computational Physics, pp. 439–582. Springer, Berlin (1999)
Shu, C.W., Osher, S.: Efficient implementation of essentially non-oscillatory shock-capturing schemes. J. Comput. Phys. 77, 439–471 (1988)
Singh, S., Singh, R.: Comparison of various edge detection techniques. In: 2nd International Conference on Computing for Sustainable Global Development, pp. 393–396 (2015)
Sober, B., Levin, D.: Manifold approximation by moving least squares projection (MMLS). Constr. Approx. 52, 433–478 (2020)
Strang, G.: Linear Algebra and Learning from Data. Wellesley-Cambridge Press, Cambridge (2019)
Wendland, H.: Fast evaluation of radial basis functions: methods based on partition of unity. In: Approximation Theory, X: Wavelets, Splines, and Applications, pp. 473–483. Nashville, TN, Vanderbilt University Press (2002)
Wendland, H.: Scattered Data Approximation. Cambridge University Press, Cambridge (2005)
Wolf, W.R., Azevedo, J.L.F.: High-order ENO and WENO schemes for unstructured grids. Int. J. Numer. Methods Fluids 55, 917–943 (2007)
Yi, S., Labate, D., Easley, G.R., Krim, H.: A shearlet approach to edge analysis and detection. IEEE Trans. Image Process. 18(5), 929–941 (2009)
Acknowledgements
We wish to express our deep gratitude to anonymous reviewers for their helpful comments which improved the quality of the paper.
Funding
Open access funding provided by Uppsala University. The work of the second author (Navid Soodbakhsh) was supported by a grant from Iranian National Science Foundation (INSF), No. 98001906.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mirzaei, D., Soodbakhsh, N. A fault detection method based on partition of unity and kernel approximation. Numer Algor 93, 1759–1794 (2023). https://doi.org/10.1007/s11075-022-01488-4
Keywords
- Partition of unity
- Radial basis functions
- Polyharmonic splines
- Fault curves
- Fault points
- Principal component regression