1 Introduction

This paper provides simple formulations for integrating over manifolds of codimension one, and of codimension two in \(\mathbb {R}^3\), when the manifolds are described by functions that map points in \(\mathbb {R}^n\) (\(n=2,3\)) to their closest points, in the Euclidean distance, on curves or surfaces. The idea for the present work originated in [10], where the authors proposed a formulation for computing integrals of the form

$$\begin{aligned} \int _{\partial \Omega }v(\mathbf {x}(s))\, \mathrm{d}s, \end{aligned}$$

in the level set framework, namely when the domain \(\Omega \) is represented implicitly by the signed distance function to its boundary \(\partial \Omega \). Typically in a level set method [15, 16, 21], to evaluate an integral of the form (1) where \(\partial \Omega \) is the zero level set of a continuous function \(\varphi \), it is necessary to extend the function v defined on the boundary \(\partial \Omega \) to a neighborhood in \(\mathbb {R}^{n}\). The extension of v, denoted \(\tilde{v}\), is typically constant in the direction normal to \(\partial \Omega \). The integral is then approximated by an integral involving a regularized Dirac-\(\delta \) function concentrated on \(\partial \Omega \), namely

$$\begin{aligned} \int _{\partial \Omega }v(\mathbf {x}(s))\mathrm{d}s\approx \int _{\mathbb {R}^{n}}\tilde{v}(\mathbf {x})\delta _{\epsilon }(\varphi (\mathbf {x}))|\nabla \varphi (\mathbf {x})|\mathrm{d}\mathbf {x}. \end{aligned}$$

Various numerical approximations of this delta function have been proposed, see e.g., [4, 5, 22, 24, 27].
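As a concrete illustration of this classical approach (a sketch, not taken from the cited works), a common choice is the cosine kernel \(\delta _{\epsilon }(t)=(1+\cos (\pi t/\epsilon ))/(2\epsilon )\) supported on \([-\epsilon ,\epsilon ]\). With \(\varphi \) the signed distance to the unit circle, so that \(|\nabla \varphi |=1\), a uniform grid sum recovers the circumference:

```python
import numpy as np

def delta_eps(t, eps):
    """Regularized Dirac delta: cosine kernel supported on [-eps, eps]."""
    return np.where(np.abs(t) < eps, (1.0 + np.cos(np.pi * t / eps)) / (2.0 * eps), 0.0)

# Approximate the circumference of the unit circle (v = 1) via
#   int_{R^2} delta_eps(phi(x)) |grad phi(x)| dx,   phi = signed distance.
h, eps = 0.01, 0.1
x = np.arange(-2, 2, h) + h / 3        # offset so no grid point lies exactly on the circle
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 1.0       # signed distance to the unit circle, |grad phi| = 1
approx = np.sum(delta_eps(phi, eps)) * h**2
print(approx)                          # close to 2*pi
```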

In [10], with the choice of \(\varphi =d_{\partial \Omega }\) being a signed distance function to \(\partial \Omega \), the integral (1) is expressed as an average of integrals over nearby level sets of \(d_{\partial \Omega }\), where these nearby level sets continuously sweep a thin tubular neighborhood around the boundary \(\partial \Omega \) of radius \(\epsilon \). Consequently, (1) is equivalent to the volume integral shown on the right hand side below:

$$\begin{aligned} \int _{\partial \Omega }v(\mathbf {x}(s))\mathrm{d}s=\int _{\mathbb {R}^{n}}v(\mathbf {x}^{*})J(\mathbf {x};d_{\partial \Omega })\delta _{\epsilon }(d_{\partial \Omega }(\mathbf {x}))\mathrm{d}\mathbf {x}, \end{aligned}$$

where \(\delta _{\epsilon }\) is an averaging kernel, \(\mathbf {x}^{*}\) is the closest point on \(\partial \Omega \) to \(\mathbf {x}\) and \(J(\mathbf {x};d_{\partial \Omega })\) accounts for the change in curvature between the nearby level sets and the zero level set.

Now suppose that \(\partial \Omega \) is a smooth hypersurface in \(\mathbb {R}^3\) and assume that \(\mathbf {x}\) is sufficiently close to \(\partial \Omega \) so that the closest point mapping

$$\begin{aligned} \mathbf {x}^*=P_{\partial \Omega }(\mathbf {x})=\mathrm {argmin}_{\mathbf {y}\in \partial \Omega } |\mathbf {x}-\mathbf {y}| \end{aligned}$$

is continuously differentiable. Then the restriction of \(P_{\partial \Omega }\) to \(\partial \Omega _\eta \) is a diffeomorphism between \(\partial \Omega _\eta \) and \(\partial \Omega \), where \(\partial \Omega _{\eta } := \left\{ \mathbf {x} : d_{\partial \Omega } (\mathbf {x}) = \eta \right\} \). As a result, it is possible to write integrals over \(\partial \Omega \) using points on \(\partial \Omega _\eta \) as:

$$\begin{aligned} \int _{\partial \Omega } v(\mathbf {x}) \mathrm{d}S = \int _{\partial \Omega _\eta } v(\mathbf {x}^*)J(\mathbf {x};\eta )\mathrm{d}S, \end{aligned}$$

where \(J(\mathbf {x};\eta )\) comes from the change of variables defined by \(P_{\partial \Omega }\) restricted to \(\partial \Omega _\eta \). Averaging these integrals over \(\eta \) with a kernel, \(\delta _\epsilon \), compactly supported in \([-\epsilon ,\epsilon ]\), we obtain

$$\begin{aligned} \int _{\partial \Omega } v(\mathbf {x}) \mathrm{d}S = \int _{-\epsilon }^{\epsilon } \delta _\epsilon (\eta ) \int _{\partial \Omega _\eta } v(\mathbf {x}^*)J(\mathbf {x};\eta )\mathrm{d}S~\mathrm{d}\eta . \end{aligned}$$

Formula (2) then follows from the coarea formula [7] applied to the integral on the right hand side.

In the following section, we show that in three dimensions, the Jacobian J in (2) is the product of the first two singular values, \(\sigma _1\) and \(\sigma _2\), of the Jacobian matrix \(P'_{\Gamma }\) of the closest point mapping; namely,

$$\begin{aligned} \int _{\partial \Omega }v(\mathbf {x}(s))\mathrm{d}s=\int _{\mathbb {R}^{3}}v(P_{\partial \Omega }(\mathbf {x}))\delta _{\epsilon }(d_{\partial \Omega }(\mathbf {x}))\prod _{j=1}^{2} \sigma _j(\mathbf {x}) \mathrm{d}\mathbf {x}. \end{aligned}$$

To motivate the new approach using singular values, we consider Cartesian coordinate systems with the origin placed at points sufficiently close to the surface and the z direction normal to the surface. The partial derivatives of the closest point mapping in the z direction then vanish, while the partial derivatives in the other two directions naturally correspond to differentiation in the tangential directions. Thus one of the singular values should be 0 while the other two are related to the surface area element. We also derive a similar formula for integration along curves in three dimensions (codimension 2). The advantages of this new formula include the ease of constructing higher-order approximations of J via, e.g., simple differencing, even in neighborhoods of surface boundaries where curvatures become unbounded.

This paper is motivated by the recent success of closest point methods and the Dynamic Surface Extension method [23] for evolving interfaces and solving partial differential equations on surfaces [11–13, 19], and by the need to process data sets that contain unstructured points sampled from some underlying surfaces. It targets applications where the manifolds are not defined by patches of explicit parameterizations and may evolve drastically due to some coupled processes; see, e.g., free boundary problems [9]. Our work provides a convenient way to formulate boundary integral methods in such applications without conversion to local parameterizations. If the manifolds are defined by explicit parameterizations, it is natural and typically more accurate to use conventional methods such as Nyström methods with quadratures on the parameter space or boundary element methods with weak formulations; see, e.g., [1]. Additionally, for applications involving fluid–structure interactions, we mention the immersed boundary method, which involves accurate discretizations of Dirac delta measures [14, 17].

Closest point mappings are easily computed in the context of level set methods [16] since there exist fast algorithms for constructing distance functions from level set functions [2, 18, 20, 25, 26]. More precisely,

$$\begin{aligned} P_{\partial \Omega }(\mathbf {x})=\mathbf {x}-d_{\partial \Omega }(\mathbf {x})\nabla d_{\partial \Omega }(\mathbf {x}). \end{aligned}$$
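This formula is straightforward to apply in practice. A minimal sketch for the unit circle, where the signed distance and its gradient are known analytically (in a level set computation they would instead come from the grid):

```python
import numpy as np

def closest_point(x):
    """P(x) = x - d(x) * grad d(x), for the unit circle."""
    r = np.linalg.norm(x)
    d = r - 1.0                  # signed distance to the unit circle
    grad_d = x / r               # gradient of the distance function
    return x - d * grad_d

p = closest_point(np.array([1.3, 0.4]))
print(p, np.linalg.norm(p))      # |p| = 1: p lies on the circle
```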

Our previous work [10] and the present paper provide a simple framework for constructing numerical schemes for boundary integral methods when the interface is described implicitly by a level set function, and are intended for use in such contexts.

Finally, closest point mappings can also be computed easily from dense and unorganized point sets that are acquired directly from an imaging device (e.g., LIDAR). This paper lays the foundation of a numerical scheme for computing integrals over surfaces sampled by unstructured point clouds.

2 Integration using the closest point mapping

In this section, we relate the Jacobian J in (2) to the singular values of the Jacobian matrix of the closest point mapping from \(\mathbb {R}^2\) or \(\mathbb {R}^3\) to \(\Gamma \), where \(\Gamma \) denotes the curves or surfaces on which integrals are defined. We assume that in three dimensions, if \(\Gamma \) is not closed, it has smooth boundaries. For clarity of the exposition in the rest of the paper, we will now denote the distance function simply by d.

2.1 Codimension 1

We consider a \(C^{2}\) compact curve or surface \(\Gamma \), which may or may not be closed. If \(\Gamma \) is closed, then it is the boundary of a domain \(\Omega \), so that \(\Gamma \) can be denoted \(\partial \Omega \). If \(\Gamma \) is not closed, we assume that it has smooth boundaries. We define \(d:\mathbb {R}^{n}\mapsto \mathbb {R}\) to be the distance function to \(\Gamma \) and \(P_{\Gamma }\) to be the closest point mapping \(P_{\Gamma }:\mathbb {R}^{n}\mapsto \Gamma \) (for \(n=2,3\)) defined as

$$\begin{aligned} |P_{\Gamma }(\mathbf {x})-\mathbf {x}|=\min _{\mathbf {y}\in \Gamma }|\mathbf {y}-\mathbf {x}|. \end{aligned}$$

We let \(d_{0}\) be the distance function to \(\Gamma \) if it is open and \(d_{s}\) be the signed distance function to \(\Gamma =\partial \Omega \) if it is closed. The signed distance function is defined as

$$\begin{aligned} d_{s}(\mathbf {x}):={\left\{ \begin{array}{ll} \inf _{\mathbf {y}\in \Omega ^{c}}|\mathbf {x}-\mathbf {y}| &{} \text{ if } \mathbf {x}\in \Omega ,\\ -\inf _{\mathbf {y}\in \Omega }|\mathbf {x}-\mathbf {y}| &{} \text{ if } \mathbf {x}\in \bar{\Omega }^{c}. \end{array}\right. } \end{aligned}$$

Then we define d as follows:

$$\begin{aligned} d(\mathbf {x}):={\left\{ \begin{array}{ll} d_{0}(\mathbf {x}) &{} \text{ if } \Gamma \text{ is } \text{ open, }\\ d_{s}(\mathbf {x}) &{} \text{ if } \Gamma \text{ is } \text{ closed. } \end{array}\right. } \end{aligned}$$

The following lemma provides a concise expression of the Gaussian curvature in terms of the distance function. This is likely a known result, but we include its proof for completeness.

Lemma 1

Let d be the distance function to \(\Gamma \) defined in (5). For \(|\eta |\) sufficiently close to 0, the Gaussian curvature at a point on the \(\eta \) level set \(\Gamma _{\eta } := \left\{ \xi :d(\xi )=\eta \right\} \) can be expressed as

$$\begin{aligned} G_{\eta }=d_{xx}d_{yy}+d_{xx}d_{zz}+d_{yy}d_{zz}-d_{xy}^{2}-d_{xz}^{2}-d_{yz}^{2}. \end{aligned}$$


Proof

Starting with the definition of the Gaussian curvature G for a surface (see [8]), we can obtain an expression for the Gaussian curvature of the \(\eta \)-level set \(\Gamma _{\eta }\) in terms of d as

$$\begin{aligned} G&=\langle \nabla d,\mathrm{adj}(\mathrm{Hess}(d))\nabla d\rangle \nonumber \\&=d_{x}^{2}(d_{yy}d_{zz}-d_{yz}^{2})+d_{y}^{2}(d_{xx}d_{zz}-d_{xz}^{2})+d_{z}^{2}(d_{xx}d_{yy}-d_{xy}^{2})\nonumber \\&\quad +2[d_{x}d_{y}(d_{xz}d_{yz}-d_{xy}d_{zz})+~d_{y}d_{z}(d_{xy}d_{xz}-d_{yz}d_{xx})+d_{x}d_{z}(d_{xy}d_{yz}-d_{xz}d_{yy})].\nonumber \\ \end{aligned}$$

We show that this expression is the same as (6) by rearranging the terms above and using the fact that close to \(\Gamma \) the distance function satisfies \(|\nabla d| = 1\). First we rearrange the terms in G:

$$\begin{aligned} G= & {} d_{x}^{2}d_{yy}d_{zz}+d_{y}^{2}d_{xx}d_{zz}+d_{z}^{2}d_{xx}d_{yy}-d_{x}^{2}d_{yz}^{2}-d_{y}^{2}d_{xz}^{2}-d_{z}^{2}d_{xy}^{2}\\&+ \, 2[d_{x}d_{y}(d_{xz}d_{yz}-d_{xy}d_{zz})+~d_{y}d_{z}(d_{xy}d_{xz}-d_{yz}d_{xx})+d_{x}d_{z}(d_{xy}d_{yz}-d_{xz}d_{yy})], \end{aligned}$$

and rewrite each of the first six terms in terms of \(|\nabla d|^{2}\), e.g.,

$$\begin{aligned} d_{x}^{2}d_{yy}d_{zz}=\underbrace{|\nabla d|^{2}}_{=1}d_{yy}d_{zz}-d_{y}^{2}d_{yy}d_{zz}-d_{z}^{2}d_{yy}d_{zz}=d_{yy}d_{zz}-d_{y}^{2}d_{yy}d_{zz}-d_{z}^{2}d_{yy}d_{zz}. \end{aligned}$$

Thus we have

$$\begin{aligned}&d_{x}^{2}d_{yy}d_{zz}+d_{y}^{2}d_{xx}d_{zz}+d_{z}^{2}d_{xx}d_{yy}-d_{x}^{2}d_{yz}^{2}-d_{y}^{2}d_{xz}^{2}-d_{z}^{2}d_{xy}^{2}\nonumber \\&\quad = \underbrace{d_{xx}d_{yy}+d_{xx}d_{zz}+d_{yy}d_{zz}-d_{xy}^{2}-d_{xz}^{2}-d_{yz}^{2}}_{=G_\eta }-d_{y}^{2}d_{yy}d_{zz}-d_{z}^{2}d_{yy}d_{zz}\nonumber \\&\qquad -\,d_{x}^{2}d_{xx}d_{zz}-d_{z}^{2}d_{xx}d_{zz}-d_{y}^{2}d_{xx}d_{yy}-d_{x}^{2}d_{xx}d_{yy}\nonumber \\&\qquad +\,d_{y}^{2}d_{yz}^{2}+d_{z}^{2}d_{yz}^{2}+d_{x}^{2}d_{xz}^{2}+d_{z}^{2}d_{xz}^{2}+d_{x}^{2}d_{xy}^{2}+d_{y}^{2}d_{xy}^{2} \end{aligned}$$

Using (8) and rearranging the rest of the terms in (7) we obtain \(G=G_{\eta }\). \(\square \)

Proposition 2

Consider a \(C^{2}\) compact curve or surface \(\Gamma \subset \mathbb {R}^{n}\) (\(n=2,3\)) of codimension 1 and let d be defined as in (5). Define the closest point projection map \(P_{\Gamma }\) as in (4) for \(\mathbf {x}\in \mathbb {R}^{n}\). For \(|\eta |\) sufficiently close to zero, let \(\Gamma _{\eta }\) be the \(\eta \) level set of d:

$$\begin{aligned} \Gamma _{\eta }:=\left\{ \mathbf {x}:d(\mathbf {x})=\eta \right\} . \end{aligned}$$

Define the Jacobian \(J_{\eta }\) as

$$\begin{aligned} J_{\eta }:=\left\{ \begin{array}{l@{\quad }l} 1+\eta \kappa _{\eta } &{} \mathrm{if } \,\, n=2,\\ 1+2\eta H_{\eta }+\eta ^{2}G_{\eta } &{} \mathrm{if } \,\, n=3, \end{array}\right. \end{aligned}$$

where \(\kappa _{\eta }\) is the signed curvature of \(\Gamma _{\eta }\) in 2D, and \(H_{\eta }\) and \(G_{\eta }\) are its mean and Gaussian curvatures, respectively, in 3D.

Then if \(P'_{\Gamma } \) is the Jacobian matrix of \(P_{\Gamma }\) we have

$$\begin{aligned} J_{\eta }=\left\{ \begin{array}{l@{\quad }l} \sigma _{1}, &{} n=2,\\ \sigma _{1}\sigma _{2}, &{} n=3, \end{array}\right. \end{aligned}$$

where \(\sigma _1, \sigma _2\) are the first two singular values of the Jacobian matrix \(P'_{\Gamma }\).


Proof

The distance function d satisfies the property \(d(\mathbf {x})=0\) for \(\mathbf {x}\in \Gamma \). Also, since \(\Gamma \) is \(C^{2}\), its distance function d belongs to \(C^{2}(\mathbb {R}^{n},\mathbb {R})\); see, e.g., [3, 6]. It follows that the order of the mixed partial derivatives does not matter. In addition, the normals to a smooth interface do not focus right away, so that the distance function is smooth in a tubular neighborhood T around \(\Gamma \), and is linear with slope one along the normals. Therefore, we have

$$\begin{aligned} |\nabla d|=1\quad \text{ for } \text{ all } \mathbf {x}\in T. \end{aligned}$$

The third important fact is that the Laplacian of d at a point \(\mathbf {x}\) gives (up to a constant related to the dimension) the mean curvature of the isosurface of d passing through \(\mathbf {x}\), namely

$$\begin{aligned} \Delta d(\mathbf {x})=(1-n)H(\mathbf {x}), \end{aligned}$$

where \(H(\mathbf {x})\) is the Mean curvature of the level set \(\left\{ \mathbf {y}:d(\mathbf {y})=d(\mathbf {x})\right\} \). Differentiating (11) with respect to each variable gives the following equations in three dimensions:

$$\begin{aligned} d_{x}d_{xx}+d_{y}d_{xy}+d_{z}d_{xz}=0,\end{aligned}$$
$$\begin{aligned} d_{x}d_{yx}+d_{y}d_{yy}+d_{z}d_{yz}=0,\end{aligned}$$
$$\begin{aligned} d_{x}d_{zx}+d_{y}d_{zy}+d_{z}d_{zz}=0. \end{aligned}$$

In particular the two-dimensional case can be derived by assuming that the distance function is constant in z.

Two dimensions. In that case the Jacobian matrix \(P'_{\Gamma }\) of the closest point projection map is

$$\begin{aligned} P'_{\Gamma }=\left( \begin{array}{c@{\quad }c} 1-d_{x}^{2}-dd_{xx} &{} -(d_{y}d_{x}+dd_{yx})\\ -(d_{x}d_{y}+dd_{xy}) &{} 1-d_{y}^{2}-dd_{yy} \end{array}\right) . \end{aligned}$$

By Schwarz’s theorem, \(d_{xy}=d_{yx}\), making \(P'_{\Gamma }\) a real symmetric matrix. It is therefore diagonalizable with eigenvalues 0 and \(1-d\Delta d\). Indeed, we have

$$\begin{aligned} P'_{\Gamma }\nabla d= & {} \left( \begin{array}{c} d_{x}(\underbrace{1-d_{x}^{2}-d_{y}^{2}}_{=0 \text{ by } (11) \text{ in } \text{2D }})-d(\underbrace{d_{x}d_{xx}+d_{y}d_{yx}}_{=0 \text{ by } (13) \text{ in } \text{2D }})\\ \\ d_{y}(\underbrace{1-d_{x}^{2}-d_{y}^{2}}_{=0 \text{ by } (11) \text{ in } \text{2D }})-d(\underbrace{d_{y}d_{yy}+d_{x}d_{xy}}_{=0 \text{ by } (14) \text{ in } \text{2D }}) \end{array}\right) =\mathbf {0}, \end{aligned}$$

and for \(\mathbf {v}=\left( \begin{array}{c} -d_{y}\\ d_{x} \end{array}\right) \),

$$\begin{aligned} P'_{\Gamma }\mathbf {v}= & {} \left( \begin{array}{c} -d_{y}+d_{y}d_{x}^{2}+d_{y}dd_{xx}-d_{x}^{2}d_{y}-dd_{x}d_{xy}\\ d_{x}+d_{x}d_{y}^{2}+d_{y}dd_{xy}-d_{y}^{2}d_{x}-d_{x}dd_{yy} \end{array}\right) \\= & {} \left( \begin{array}{c} -d_{y}\\ d_{x} \end{array}\right) +d\left( \begin{array}{c} d_{y}d_{xx}-d_{x}d_{xy}\\ d_{y}d_{xy}-d_{x}d_{yy} \end{array}\right) \\= & {} \mathbf {v}+d\left( \begin{array}{c} -\Delta d(-d_{y})-\underbrace{(d_{y}d_{yy}+d_{x}d_{xy})}_{=0 \text{ by } (14) \text{ in } \text{2D }}\\ \\ -\Delta d(d_{x})+\underbrace{d_{x}d_{xx}+d_{y}d_{xy}}_{=0 \text{ by } (13) \text{ in } \text{2D }} \end{array}\right) \\= & {} (1-d\Delta d)\mathbf {v}. \end{aligned}$$

Since \(||\mathbf {v}||=1\), \(\mathbf {v}\) is an eigenvector corresponding to the eigenvalue \(\lambda =1-d\Delta d\). Thus, for \(\mathbf {x}\) such that \(d(\mathbf {x})=\eta \) we have that the eigenvalue \(\lambda \) of \(P'_{\Gamma }\) satisfies

$$\begin{aligned} \lambda =1-\eta \Delta d=1+\eta \kappa _{\eta } \end{aligned}$$

by (12). Since \(1+\eta \kappa _{\eta }\ge 0\), it follows that \(\lambda \) coincides with the singular value of \(P'_{\Gamma }\) and hence

$$\begin{aligned} \sigma _1=1+\eta \kappa _{\eta }. \end{aligned}$$
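As a quick sanity check of this relation (a numerical sketch, not part of the proof), take \(\Gamma \) to be the unit circle, for which \(P_{\Gamma }(\mathbf {x})=\mathbf {x}/|\mathbf {x}|\), and compare the singular values of a finite-difference Jacobian with \(1+\eta \kappa _{\eta }\):

```python
import numpy as np

def P(x):
    """Closest point mapping onto the unit circle."""
    return x / np.linalg.norm(x)

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian matrix of f at x."""
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = 0.9 * np.array([np.cos(0.7), np.sin(0.7)])   # a point with |x| = 0.9
eta = 1.0 - np.linalg.norm(x)                    # signed distance (positive inside)
kappa_eta = 1.0 / np.linalg.norm(x)              # curvature of the level circle through x
sigma = np.linalg.svd(jacobian(P, x), compute_uv=False)
print(sigma[0], 1 + eta * kappa_eta)             # both equal 1/0.9; sigma[1] is ~0
```

Here \(\sigma _1 = 1/|\mathbf {x}| = 1+\eta \kappa _{\eta }\) and the second singular value vanishes, as predicted.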

Three dimensions. Since for \(|\eta |\) sufficiently close to 0 the distance function is \(C^2\), the Jacobian matrix

$$\begin{aligned} P'_{\Gamma }=\left( \begin{array}{c@{\quad }c@{\quad }c} 1-d_{x}^{2}-dd_{xx} &{} -(d_{y}d_{x}+dd_{yx}) &{} -(d_{z}d_{x}+dd_{zx})\\ -(d_{x}d_{y}+dd_{xy}) &{} 1-d_{y}^{2}-dd_{yy} &{} -(d_{z}d_{y}+dd_{zy})\\ -(d_{x}d_{z}+dd_{xz}) &{} -(d_{y}d_{z}+dd_{yz}) &{} 1-d_{z}^{2}-dd_{zz} \end{array}\right) , \end{aligned}$$

is a real symmetric matrix which is diagonalizable with one zero eigenvalue and two other eigenvalues \(\lambda _{1}\) and \(\lambda _{2}\). Indeed using (13), (14), (15) and (11) we can show that

$$\begin{aligned} P'_{\Gamma }\nabla d=\mathbf {0}. \end{aligned}$$

Now consider \(\mathbf {x}\) such that \(d(\mathbf {x)}=\eta \). Then, the characteristic polynomial \(\chi (\lambda )\) of \(P'_{\Gamma }\) is

$$\begin{aligned} \chi (\lambda )=-\lambda \left( \lambda ^{2}-(2-\eta \Delta d)\lambda -Q\right) , \end{aligned}$$

where \(Q=-G_\eta \eta ^{2}+\eta \Delta d-1\) with \(G_\eta \) defined in (6). Since the other two eigenvalues of \(P'_{\Gamma }\) are the solutions of the quadratic equation \(\lambda ^{2}~-~(2~-~\eta \Delta d)\lambda ~-~Q~=~0\), it follows that

$$\begin{aligned} \lambda _{1}\lambda _{2}=-Q=1-\eta \Delta d+\eta ^{2}G_\eta =1+2\eta H_{\eta }+\eta ^{2}G_{\eta }. \end{aligned}$$

Since \(1+2\eta H_{\eta }+\eta ^{2}G_{\eta }\ge 0\), it follows that

$$\begin{aligned} \sigma _{1}\sigma _{2}=1+2\eta H_{\eta }+\eta ^{2}G_{\eta }, \end{aligned}$$

where \(\sigma _{1}\) and \(\sigma _{2}\) are singular values of \(P'_{\Gamma }\). \(\square \)

This leads to the following theorem:

Fig. 1

Level set of a 2D open curve. An example of an open curve \(\Gamma \) (black curve) and its \(\eta \)-level set \(\Gamma _{\eta }\) (red curve). \(\Gamma _{\eta }\) consists of a tubular part and two semi-circles at the two ends

Theorem 3

Consider a curve \(\Gamma \) in 2D or a surface in 3D, with \(C^{2}\) boundaries if it is not closed, and define \(d:\mathbb {R}^{n}\mapsto {\mathbb {R}}\) (\(n=2,3\)) to be the distance function to \(\Gamma \) and \(P_{\Gamma }:\mathbb {R}^{n}\mapsto \Gamma \) the closest point mapping to \(\Gamma \). Then, provided \(\epsilon \max _{\mathbf {x} \in \Gamma } |\kappa (\mathbf {x})| < 1\), where \(\kappa (\mathbf {x})\) ranges over the principal curvatures of \(\Gamma \) at \(\mathbf {x}\), we have

$$\begin{aligned} \int _{\Gamma }v(\mathbf {x})\mathrm{d}\mathbf {x}=\int _{\mathbb {R}^{n}}v(P_{\Gamma }(\mathbf {x}))\delta _{\epsilon }(d(\mathbf {x}))\Sigma (\mathbf {x})\mathrm{d}\mathbf {x}, \end{aligned}$$

where \(\delta _{\epsilon }\) is an averaging kernel and \(\Sigma (\mathbf {x})\) is defined as

$$\begin{aligned} \Sigma (\mathbf {x})=\left\{ \begin{array}{l@{\quad }l} \sigma _{1}(\mathbf {x}), &{} n=2,\\ \sigma _{1}(\mathbf {x})\sigma _{2}(\mathbf {x}), &{} n=3, \end{array}\right. \end{aligned}$$

where \(\sigma _{j}(\mathbf {x})\), \(j=1,2\), is the j-th singular value of the Jacobian matrix \(P'_{\Gamma }\) evaluated at \(\mathbf {x}\).


Proof

If \(\Gamma \) is closed, we combine Eq. (2) with the result \(J(\mathbf {x})=\Sigma (\mathbf {x})\) from Eq. (10) of Proposition 2.

If \(\Gamma \) is open there is a little more to show, since Eq. (2) was only derived for closed manifolds. Before we proceed, it is necessary to understand what \(\Gamma _{\eta }\), defined in (9) (an \(\eta \)-level set of d), looks like for an open curve in two dimensions and for a surface with boundaries in three dimensions.

In two dimensions, \(\Gamma _{\eta }\) consists of a flat tubular part on either side of the curve and two semi-circles at the two ends of the curve. See Fig. 1.

In three dimensions \(\Gamma \) is in general made up of three distinct parts: the interior part, the edges of the boundary and the corners. If we assume that \(\Gamma \) has N edges then we can write \(\Gamma =\Gamma ^{o}\cup (\cup _{i=1}^{N}E_{i})\cup (\cup _{i=1}^{N}C_{i})\), where \(\Gamma ^{o}\) is the interior of \(\Gamma \), \(E_{i}\) is the i-th edge of the boundary of \(\Gamma \) and \(C_{i}\) is its i-th corner. In that setting we can write \(\Gamma _{\eta }=I_{\eta }\cup (\cup _{i=1}^{N}T_{i}^{\eta })\cup (\cup _{i=1}^{N}S_{i}^{\eta })\), where \(I_{\eta }\) is the inside portion of \(\Gamma _{\eta }\), \(T_{i}^{\eta }\) is the cylindrical part of \(\Gamma _{\eta }\) representing the set of points located at a distance \(\eta \) from the i-th edge \(E_{i}\), and finally \(S_{i}^{\eta }\) is the spherical part of \(\Gamma _{\eta }\) representing the set of points located at a distance \(\eta \) from the i-th corner \(C_{i}\). See Fig. 2.

In both cases, we need to integrate over \(\Gamma _{\eta }\) and then subtract the two semi-circles at the two end points of the curve (in two dimensions) or subtract the portions of sphere at the corners of the surface and the portions of cylinders at the edges of the surface (in three dimensions). However, it turns out that the subtraction is unnecessary since \(\Sigma (\mathbf {x})=0\) on each of the subtracted pieces as shown below.

Two dimensions. On the semi-circle around the end point of a curve, the closest point mapping is constant since all points on the semi-circle \(\Gamma _{\eta }\) map to the end point. As a result, the singular values of the Jacobian matrix of the closest point mapping are all zeros and thus \(\Sigma (\mathbf {x})=0\) on the semi-circles around the end points of a curve.

Three dimensions. As in two dimensions, on the spherical portions around a corner point of a surface, the closest point mapping is constant and thus \(\Sigma (\mathbf {x})=0\). On the cylindrical portions, the closest point mapping is constant along the radial direction (one of the principal directions, i.e., singular vectors), so the singular value along that direction is zero. Since \(\Sigma (\mathbf {x})\) is the product of the singular values, it follows that \(\Sigma (\mathbf {x})=0\) on the cylindrical portions as well. Consequently, Eq. (16) holds for any \(C^{2}\) curve or surface of codimension 1 with \(C^{2}\) boundaries. \(\square \)
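As a numerical illustration of Theorem 3 in two dimensions (a sketch, under the assumption that \(\delta _{\epsilon }\) is the cosine kernel \((1+\cos (\pi \eta /\epsilon ))/(2\epsilon )\)), the following computes the circumference of the unit circle with \(v\equiv 1\), forming \(P'_{\Gamma }\) by finite differences and \(\sigma _1\) by the singular value decomposition:

```python
import numpy as np

# Circumference of the unit circle via the singular-value formula, v = 1.
h, eps = 0.02, 0.1
x = np.arange(-2, 2, h) + h / 3            # offset the grid away from the origin
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.sqrt(X**2 + Y**2)
d = R - 1.0                                 # signed distance to the unit circle
Px, Py = X / R, Y / R                       # closest point mapping P(x) = x/|x|

# Central-difference Jacobian of P on the grid, then its singular values.
dPx_dx, dPx_dy = np.gradient(Px, h, h)
dPy_dx, dPy_dy = np.gradient(Py, h, h)
J = np.stack([np.stack([dPx_dx, dPx_dy], -1),
              np.stack([dPy_dx, dPy_dy], -1)], -2)
sigma1 = np.linalg.svd(J, compute_uv=False)[..., 0]

delta = np.where(np.abs(d) < eps, (1 + np.cos(np.pi * d / eps)) / (2 * eps), 0.0)
approx = np.sum(delta * sigma1) * h**2      # v = 1
print(approx)                               # close to 2*pi
```

For this example \(\sigma _1 = 1/|\mathbf {x}| = 1+\eta \kappa _{\eta }\) analytically, so the sum converges to \(2\pi \) as h and \(\epsilon \) are refined.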

Fig. 2

Level set of a 3D surface with boundaries. An example of a surface with boundaries viewed from different angles and its corresponding \(\eta \)-level set \(\Gamma _{\eta }\) viewed from the same angles. The figure at the bottom right corner shows the surface and \(\Gamma _{\eta }\)

2.2 Codimension 2

We consider a \(C^{2}\) curve in \(\mathbb {R}^{3}\) denoted by \(\Gamma \) and let \(\gamma (s)\) be a parameterization by arclength of \(\Gamma .\) We denote by \(d:\mathbb {R}^{3}\mapsto {\mathbb {R}^{+}\cup \{0\}}\) the distance function to \(\Gamma \) and let \(P_{\Gamma }:\mathbb {R}^{3}\mapsto \Gamma \) be the closest point mapping to \(\Gamma \). We consider a parameterization of the tubular part of the level surface for \(\eta \in [0,\epsilon ]\) defined as

$$\begin{aligned} \mathbf {x}(s,\theta ,\eta )=\gamma (s)+\eta \cos \theta \vec {\mathbf {N}}(s)+\eta \sin \theta \vec {\mathbf {B}}(s), \end{aligned}$$

where \(\vec {\mathbf {T}}=\frac{d\gamma }{ds}\), \(\vec {\mathbf {N}}\) and \(\vec {\mathbf {B}}\) constitute the Frenet frame for \(\gamma \) as illustrated in Fig. 3.

Fig. 3

Level set of an open curve in 3D. Three dimensional curve with its \(\eta \)-level surface \(\Gamma _{\eta }\) in green and the Frenet frame at a point on \(\Gamma _{\eta }\)

If we project a point \(\mathbf {x}\) on the tubular part of the level surface \(\Gamma _{\eta }\) defined in (9), we have \(P_{\Gamma }(\mathbf {x}(s,\theta ,\eta ))=\gamma (s)\). If L is the length of the curve, it follows that

$$\begin{aligned} \int _{0}^{2\pi }\int _{0}^{L}g(P_{\Gamma }(\mathbf {x}(s,\theta ,\eta )))|\mathbf {x}_{s}\times \mathbf {x}_{\theta }|\mathrm{d}s\,\mathrm{d}\theta&=\int _{0}^{2\pi }\int _{0}^{L}g(\gamma (s))\eta (1-\eta \kappa (s)\cos \theta )\mathrm{d}s\,\mathrm{d}\theta ,\nonumber \\&=\eta \int _{0}^{L}g(\gamma (s))\int _{0}^{2\pi }(1-\eta \kappa \cos \theta )\mathrm{d}\theta \,\mathrm{d}s,\nonumber \\&=2\pi \eta \int _{0}^{L}g(\gamma (s))\mathrm{d}s. \end{aligned}$$

Note that the tubular part of the level surface \(\Gamma _{\eta }\) does not contain the two hemispheres of \(\Gamma _{\eta }\) which are located at the two end points of the curve \(\Gamma \). Thus,

$$\begin{aligned} \int _{\Gamma _{\eta } \setminus \left\{ C_{1}\cup C_{2}\right\} }g(P_{\Gamma }(\mathbf {x}))\mathrm{d}S_{\mathbf {x}}=2\pi \eta \int _{\Gamma }g\mathrm{d}s, \end{aligned}$$

where \(C_{1}\) and \(C_{2}\) are the two hemispheres of the level surface \(\Gamma _{\eta }\) located at the two end points of the curve \(\Gamma \). Consequently, for sufficiently small \(\epsilon \) and by the coarea formula we obtain

$$\begin{aligned} \int _{\Gamma }g(\gamma (s))\mathrm{d}s= & {} \frac{1}{2\pi }\int _{0}^{\epsilon }\left( \frac{1}{\eta }\int _{\Gamma _{\eta } \setminus \left\{ C_{1}\cup C_{2}\right\} }g(P_{\Gamma }(\mathbf {x}))\mathrm{d}S_{\mathbf {x}}\right) K_{\epsilon }(\eta )\mathrm{d}\eta ,\\= & {} \frac{1}{2\pi }\int _{\mathbb {R}^{3}}g(P_{\Gamma }(\mathbf {x}))\frac{K_{\epsilon }(d)}{d}\chi _{(C_{1}\cup C_{2})^{c}}(\mathbf {x})\mathrm{d}\mathbf {x}, \end{aligned}$$

where \(K_{\epsilon }\) is a \(C^{1}\) averaging kernel supported in \([0,\epsilon ]\) and \(\chi _{(C_{1}\cup C_{2})^{c}}(\mathbf {x)}\) is the characteristic function of the set \((C_{1}\cup C_{2})^{c}\). Because of the term \(\frac{K_{\epsilon }(d)}{d}\) in the above equation and for better accuracy, we choose a kernel \(K_{\epsilon }\) that satisfies the condition \(K_{\epsilon }'(0) =0\). In our numerical simulations we consider the kernel

$$\begin{aligned} K_{\epsilon }^{1,1}(\eta )=\frac{1}{\epsilon }\left( 1-\cos \left( 2\pi \frac{\eta }{\epsilon }\right) \right) \chi _{[0,\epsilon ]}(\eta ). \end{aligned}$$
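A quick numerical check (a sketch using only the kernel defined above) confirms that \(K_{\epsilon }^{1,1}\) is an averaging kernel on \([0,\epsilon ]\) and that it vanishes at \(\eta =0\), which keeps \(K_{\epsilon }(d)/d\) bounded near the curve:

```python
import numpy as np

eps = 0.1
deta = eps / 10000
eta = np.arange(0.0, eps + deta / 2, deta)
K = (1 - np.cos(2 * np.pi * eta / eps)) / eps   # the kernel K^{1,1} on [0, eps]

mass = np.sum(K) * deta   # K vanishes at both endpoints, so this is the trapezoid rule
print(mass, K[0])         # mass is close to 1; K(0) = 0, so K(d)/d is bounded as d -> 0
```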

The formulation above excludes the two hemispheres located at the end points of the curve. If we instead integrate over all of \(\mathbb {R}^{3}\), i.e., drop the characteristic function, it is necessary to subtract the integrals over the hemispheres \(C_{1}\) and \(C_{2}\). The result can be summarized in the following proposition:

Proposition 4

Consider a single \(C^{2}\) curve \(\Gamma \) in \(\mathbb {R}^{3}\) parameterized by \(\gamma (s)\) where s is the arclength parameter, and let d be the distance function to \(\Gamma \). We define \(K_{\epsilon }\) to be a \(C^{1}\) averaging kernel compactly supported in \([0,\epsilon ]\) and \(P_{\Gamma }:\mathbb {R}^{3}\mapsto \Gamma \) to be the closest point mapping to \(\Gamma \).

If g is a continuous function defined on \(\Gamma \) then for sufficiently small \(\epsilon >0\) we have

$$\begin{aligned} \int _{\Gamma }g(\gamma (s))\mathrm{d}s=\frac{1}{2\pi }\int _{\mathbb {R}^{3}}g(P_{\Gamma }(\mathbf {x}))\frac{K_{\epsilon }(d(\mathbf {x}))}{d(\mathbf {x})}\mathrm{d}\mathbf {x}-2\int _{0}^{\epsilon }g(\mathbf {x}_{\eta })\eta K_{\epsilon }(\eta )\mathrm{d}\eta , \end{aligned}$$

where \(\mathbf {x}_{\eta }\) is a point on a sphere of radius \(\eta \) centered at an end point of the curve.

Note that for the computation of the length of a curve, the correction term obtained by integrating over both \(C_{1}\) and \(C_{2}\) (which together form a sphere of radius \(\eta \)) is

$$\begin{aligned} \int _{0}^{\epsilon }\frac{K_{\epsilon }(\eta )}{\eta }|\mathbb {S}_{\eta }^{2}|\mathrm{d}\eta&=\int _{0}^{\epsilon }\frac{K_{\epsilon }(\eta )}{\eta }4\pi \eta ^{2}\mathrm{d}\eta =2\pi \epsilon . \end{aligned}$$
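This value is easy to verify by quadrature (a sketch, using the kernel \(K_{\epsilon }^{1,1}\) defined earlier):

```python
import numpy as np

eps = 0.1
deta = eps / 20000
eta = np.arange(deta, eps, deta)                   # skip eta = 0, where the integrand vanishes
K = (1 - np.cos(2 * np.pi * eta / eps)) / eps      # the kernel K^{1,1}

# Correction term: integral of (K/eta) * 4*pi*eta^2 over [0, eps].
correction = np.sum((K / eta) * 4 * np.pi * eta**2) * deta
print(correction, 2 * np.pi * eps)                 # both close to 0.628
```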

This simple correction is, however, not suitable for more general cases that contain multiple curve segments and several integrands. We shall now derive a more elegant and seamless way to perform this correction.

Now consider a \(C^{2}\) curve in three dimensions and let \(P_{\Gamma }\) be its closest point mapping; we have the following theorem:

Theorem 5

Let \(\sigma (\mathbf {x})\) be the nonzero singular value of \(P'_{\Gamma }\) and let g be a continuous function defined on \(\Gamma \). If \(\gamma (s)\) is the arclength parameterization of \(\Gamma \) and if \(\epsilon \max _{x \in \Gamma } |\kappa (x)|<1\), where \(\kappa (x)\) is the curvature of the curve at x, we have

$$\begin{aligned} \int _{\Gamma }g(\gamma (s))\mathrm{d}s=\frac{1}{2\pi }\int _{\mathbb {R}^{3}}g(P_{\Gamma }(\mathbf {x}))\frac{K_{\epsilon }(d)}{d}\sigma (\mathbf {x})\mathrm{d} \mathbf {x}, \end{aligned}$$

where d is the distance function to \(\Gamma \).


Proof

Since \(K_{\epsilon }\) is compactly supported in \([0,\epsilon ]\), it is sufficient to consider points in the tubular neighborhood of the curve \(\Gamma \). Thus, for \(\mathbf {x}\) in the tubular neighborhood, there exists \(0 \le \eta \le \epsilon \) such that \(\mathbf {x}\in \Gamma _{\eta }\).

Case 1: \(\mathbf {x}\) is on the spherical part of \(\Gamma _{\eta }\), at distance \(\eta \) from either of the two end points of the curve \(\Gamma \). Without loss of generality, we assume that \(\mathbf {x}\) is at a distance \(\eta \) from the first end point \(C_{1}\), parameterized by \(\gamma (0)\); the result is the same if \(\mathbf {x}\) is on the other sphere, i.e., at a distance \(\eta \) from the other end point \(C_{2}\). In that case, \(P_{\Gamma }(\mathbf {x})=\gamma (0)\) for all \(\mathbf {x}\) on the spherical part, so that the Jacobian matrix \(P'_{\Gamma }=0\). Therefore, for \(\mathbf {x}\) on the spherical part of \(\Gamma _{\eta }\), all singular values of the Jacobian matrix are zero.

Case 2: \(\mathbf {x}\) is on the tubular part of \(\Gamma _{\eta }\). In that case, using the Frenet frame centered at the point \(\mathbf {x}=\mathbf {x}(s,\theta ,\eta )\in \Gamma _{\eta }\), we can write \(\mathbf {x}\) in the new coordinate system \((\vec {\mathbf {T}},\vec {\mathbf {N}},\vec {\mathbf {B}})\) as

$$\begin{aligned} \mathbf {x}=\gamma (s)+v\vec {\mathbf {N}}+w\vec {\mathbf {B}}, \end{aligned}$$

where \(u=0\) is the coordinate of \(\mathbf {x}\) along \(\vec {\mathbf {T}}\), v is the coordinate along \(\vec {\mathbf {N}}\) and w is the coordinate along \(\vec {\mathbf {B}}\). Since the projection \(P_{\Gamma }(\mathbf {x})=\gamma (s)\) depends on neither v nor w (the plane \((\vec {\mathbf {N}},\vec {\mathbf {B}})\) is normal to the curve \(\Gamma \)), it follows that

$$\begin{aligned} \frac{\partial P_{\Gamma }(\mathbf {x})}{\partial v}=\frac{\partial P_{\Gamma }(\mathbf {x})}{\partial w}=0. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \frac{\partial P_{\Gamma }(\mathbf {x})}{\partial u}=\frac{\partial \gamma (s)}{\partial u}=\frac{\partial s}{\partial u}\frac{\partial \gamma (s)}{\partial s}=\frac{\partial s}{\partial u}\vec {\mathbf {T}}, \end{aligned}$$

where \(\frac{\partial s}{\partial u}\) is the variation of the arclength parameter s with respect to u when the point \(\mathbf {x}\) is moving on \(\Gamma _{\eta }\) along the tangential direction \(\vec {\mathbf {T}}\). Since u is the arclength parameter along the tangential direction \(\vec {\mathbf {T}}\), it follows that we have a unit speed parameterization along \(\vec {\mathbf {T}}\) giving the identity

$$\begin{aligned} \frac{\partial \mathbf {x}}{\partial u}\cdot \vec {\mathbf {T}}=1. \end{aligned}$$

In addition,

$$\begin{aligned} \frac{\partial \mathbf {x}}{\partial s}&=\frac{\partial \gamma (s)}{\partial s}+v\frac{\partial \vec {\mathbf {N}}}{\partial s}+w\frac{\partial \vec {\mathbf {B}}}{\partial s}\\&=\vec {\mathbf {T}}-\kappa v\vec {\mathbf {T}}+\tau v\vec {\mathbf {B}}-\tau w\vec {\mathbf {N}}\\&=\left( 1-\kappa v\right) \vec {\mathbf {T}}-\tau w\vec {\mathbf {N}}+\tau v\vec {\mathbf {B}}, \end{aligned}$$

where \(\kappa \) is the curvature of \(\Gamma \) at \(\gamma (s)\) and \(\tau \) is the torsion of the curve \(\Gamma \) at the point \(\gamma (s)\). Since the level surface \(\Gamma _{\eta }\) is a tube of radius \(\eta \), its intersection with the normal plane \((\vec {\mathbf {N}},\vec {\mathbf {B}})\) is a circle of radius \(\eta \). Hence, using polar coordinates on the normal plane, we obtain \(v=\eta \cos \theta \) and \(w=\eta \sin \theta \). It follows that

$$\begin{aligned} \frac{\partial \mathbf {x}}{\partial s}\cdot \vec {\mathbf {T}}=1-\kappa \eta \cos \theta . \end{aligned}$$
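As a quick numerical check of this identity (not part of the original derivation), the following Python sketch evaluates \(\partial \mathbf {x}/\partial s\) by central differences for a helix with the parameters used later in Sect. 3.2, and compares its tangential component with \(1-\kappa \eta \cos \theta \). The explicit Frenet frame below is hand-derived for this particular helix and is our own illustration.

```python
import numpy as np

# Helix parameters matching Sect. 3.2 (a = r = 0.75, b = 0.25); s = c*t is arclength.
a, b = 0.75, 0.25
c = np.hypot(a, b)
kappa = a / c**2                     # curvature of the helix (its torsion is b/c**2)

def frenet(s):
    """Unit-speed helix gamma(s) and its Frenet frame (T, N, B), hand-derived."""
    t = s / c
    gamma = np.array([a*np.cos(t), a*np.sin(t), b*t])
    T = np.array([-a*np.sin(t), a*np.cos(t), b]) / c
    N = np.array([-np.cos(t), -np.sin(t), 0.0])
    B = np.array([b*np.sin(t), -b*np.cos(t), a]) / c
    return gamma, T, N, B

def x(s, theta, eta):
    """Point of Gamma_eta written in the tube coordinates (s, theta, eta)."""
    gamma, _, N, B = frenet(s)
    return gamma + eta*np.cos(theta)*N + eta*np.sin(theta)*B

s0, theta0, eta0, h = 0.4, 1.1, 0.1, 1e-5
dx_ds = (x(s0 + h, theta0, eta0) - x(s0 - h, theta0, eta0)) / (2*h)
T0 = frenet(s0)[1]
print(dx_ds @ T0, 1 - kappa*eta0*np.cos(theta0))   # the two values agree
```

The same check at other values of \((s,\theta ,\eta )\) confirms the identity up to the \(O(h^{2})\) error of the central difference.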

Consequently we have

$$\begin{aligned} \frac{\partial \mathbf {x}}{\partial u}\cdot \vec {\mathbf {T}}=1=\frac{\partial s}{\partial u}\frac{\partial \mathbf {x}}{\partial s}\cdot \vec {\mathbf {T}}=\frac{\partial s}{\partial u}(1-\kappa \eta \cos \theta ), \end{aligned}$$

and therefore

$$\begin{aligned} \frac{\partial s}{\partial u}=\frac{1}{1-\kappa \eta \cos \theta }. \end{aligned}$$

Therefore, in the Frenet frame, the Jacobian matrix of the closest point projection map can be written as

$$\begin{aligned} P'_{\Gamma }=\left( \begin{array}{c@{\quad }c@{\quad }c} \frac{1}{1-\kappa \eta \cos \theta } &{} 0 &{} 0\\ 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 \end{array}\right) , \end{aligned}$$

where \(\frac{1}{1-\kappa \eta \cos \theta }\) is the only nonzero eigenvalue of the Jacobian of the closest point mapping. Under the hypothesis relating the size of \(\epsilon \) to the geometry of the curve \(\Gamma \), the term \(\frac{1}{1-\kappa \eta \cos \theta }\) is strictly positive, and it is therefore also the nonzero singular value \(\sigma (\mathbf {x})\) of the Jacobian of the closest point mapping.

Therefore we have

$$\begin{aligned} \sigma (\mathbf {x})={\left\{ \begin{array}{ll} 0 &{}\quad \text{ if } \mathbf {x} \text{ is } \text{ on } \text{ the } \text{ spherical } \text{ part } \text{ of } \Gamma _{\eta },\\ \frac{1}{1-\kappa \eta \cos \theta } &{}\quad \text{ if } \mathbf {x} \text{ is } \text{ on } \text{ the } \text{ tubular } \text{ part } \text{ of } \Gamma _{\eta }. \end{array}\right. } \end{aligned}$$
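The tubular-part formula can be verified numerically. The sketch below (our own illustration; the planar circle is a hypothetical test curve, not one of the paper's examples) approximates the Jacobian of the closest point mapping onto a circle in \(\mathbb {R}^{3}\) by central differences and computes its singular values: one matches \(1/(1-\kappa \eta \cos \theta )\) and the other two vanish.

```python
import numpy as np

R = 1.0  # radius of a hypothetical planar test circle {x^2 + y^2 = R^2, z = 0}

def closest_point(x):
    """Closest point projection onto the circle (its curvature is kappa = 1/R)."""
    rho = np.hypot(x[0], x[1])
    return np.array([R*x[0]/rho, R*x[1]/rho, 0.0])

def jacobian(f, x, h=1e-5):
    """Central-difference approximation of the 3x3 Jacobian of f at x."""
    J = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        J[:, k] = (f(x + e) - f(x - e)) / (2*h)
    return J

# A point on the tubular part of Gamma_eta; its coordinate along the inward
# normal N is v = eta*cos(theta) = -(rho - R).
x0 = np.array([1.1, 0.0, 0.3])
v = -(np.hypot(x0[0], x0[1]) - R)
sigma = np.linalg.svd(jacobian(closest_point, x0), compute_uv=False)
print(sigma)                 # one nonzero singular value, two near zero
print(1.0 / (1.0 - v / R))   # predicted 1/(1 - kappa*eta*cos(theta)) = 1/1.1
```

Moving \(x_{0}\) onto one of the spherical caps of an open curve would instead give a numerically zero Jacobian, as in Case 1.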

Now using (17) and (18) we obtain

$$\begin{aligned} \int _{\Gamma _{\eta }}g(P_{\Gamma }(\mathbf {x}))\sigma (\mathbf {x})\,\mathrm{d}S_{\mathbf {x}}&= \int _{\Gamma _{\eta }\setminus \left( C_{1}\cup C_{2}\right) }g(P_{\Gamma }(\mathbf {x}))\sigma (\mathbf {x})\,\mathrm{d}S_{\mathbf {x}}\\&= \int _{0}^{2\pi }\int _{0}^{L}g(P_{\Gamma }(\mathbf {x}))\sigma (\mathbf {x})|\mathbf {x}_{s}\times \mathbf {x}_{\theta }|\,\mathrm{d}s\,\mathrm{d}\theta \\&= \int _{0}^{2\pi }\int _{0}^{L}g(\gamma (s))\,\eta \,\frac{1-\eta \kappa (s)\cos \theta }{1-\eta \kappa (s)\cos \theta }\,\mathrm{d}s\,\mathrm{d}\theta \\&= 2\pi \eta \int _{0}^{L}g(\gamma (s))\,\mathrm{d}s. \end{aligned}$$

It follows that, for \(K_{\epsilon }\) a \(C^{1}\) averaging kernel compactly supported in \([0,\epsilon ]\) and for sufficiently small \(\epsilon \), the coarea formula gives

$$\begin{aligned} \int _{\Gamma }g\,\mathrm{d}s&= \frac{1}{2\pi }\int _{0}^{\epsilon }\frac{K_{\epsilon }(\eta )}{\eta }\int _{\Gamma _{\eta }}g(P_{\Gamma }(\mathbf {x}))\sigma (\mathbf {x})\,\mathrm{d}S_{\mathbf {x}}\,\mathrm{d}\eta \\&= \frac{1}{2\pi }\int _{\mathbb {R}^{3}}g(P_{\Gamma }(\mathbf {x}))\frac{K_{\epsilon }(d(\mathbf {x}))}{d(\mathbf {x})}\sigma (\mathbf {x})\,\mathrm{d}\mathbf {x}. \end{aligned}$$

\(\square \)

3 Numerical simulations

In this section, we investigate the convergence of our numerical integration using simple Riemann sums over uniform Cartesian grids. Unless stated otherwise, the singular values are computed from a matrix whose entries are standard central difference approximations of the entries of the Jacobian matrix \(P'_{\Gamma }\). In other words, the Jacobian matrix \(P'_{\Gamma }\) is computed by using finite differences to approximate the partial derivatives of each component of \(P_\Gamma (\mathbf {x})\); more precisely, if \(P_\Gamma (\mathbf {x}) = (p_{1}(\mathbf {x}), p_{2}(\mathbf {x}), p_{3}(\mathbf {x}))\) and \(\mathbf {x}=(x_{1},x_{2},x_{3})\), we use finite differences to approximate \(\frac{\partial p_{j}}{\partial x_{k}}\) for \(1 \le j,k \le 3\). We do not evaluate the expressions that involve the partial derivatives of the distance function.

In our computations we use the cosine kernel

$$\begin{aligned} K_{\epsilon }^{\text{ cos }}(\eta )=\chi _{[-\epsilon ,\epsilon ]}(\eta )\frac{1}{2\epsilon }\left( 1+\cos \left( \frac{\pi \eta }{\epsilon }\right) \right) \end{aligned}$$

for integration on surfaces of codimension 1, and the kernel \(K_{\epsilon }^{1,1}\) defined in (19) for codimension 2. With these compactly supported kernels, formulas (16) and (21) can be regarded as integrals of functions defined on suitable hypercubes and extended periodically. In such settings, simple Riemann sums on Cartesian grids are equivalent to the trapezoidal rule, and if all the terms are known analytically, the order of accuracy is in general determined by the smoothness of the integrands; exceptions occur when the normals of the surfaces are rationally dependent on the step sizes used in the Cartesian grids.
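As a concrete illustration of this setup, the sketch below estimates the length of a circle in \(\mathbb {R}^{2}\) by a Riemann sum on a Cartesian grid. It assumes that the codimension-one formula (16) takes the form \(\int _{\Gamma }g\,\mathrm{d}s=\int _{\mathbb {R}^{2}}g(P_{\Gamma }(\mathbf {x}))K_{\epsilon }^{\cos }(d(\mathbf {x}))\sigma (\mathbf {x})\,\mathrm{d}\mathbf {x}\); this reading, the circle, and all parameter values are our own test choices, not taken from the paper.

```python
import numpy as np

R, eps, h = 1.0, 0.2, 0.01   # circle radius, tube half-width, grid spacing

# Cell-centered grid (offset by h/2 so the circle's center is not a grid point)
xs = np.arange(-R - 2*eps, R + 2*eps, h) + h/2
X, Y = np.meshgrid(xs, xs, indexing="ij")

r = np.hypot(X, Y)
U, V = R*X/r, R*Y/r          # closest point mapping onto the circle
d = r - R                    # signed distance to the circle

# Central-difference Jacobian of (U, V) at interior grid points
Ux = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2*h)
Uy = (U[1:-1, 2:] - U[1:-1, :-2]) / (2*h)
Vx = (V[2:, 1:-1] - V[:-2, 1:-1]) / (2*h)
Vy = (V[1:-1, 2:] - V[1:-1, :-2]) / (2*h)
J = np.stack([np.stack([Ux, Uy], -1), np.stack([Vx, Vy], -1)], -2)
sigma = np.linalg.svd(J, compute_uv=False)[..., 0]   # nonzero singular value

di = d[1:-1, 1:-1]
Kd = np.where(np.abs(di) < eps,
              (1 + np.cos(np.pi*di/eps)) / (2*eps), 0.0)   # cosine kernel

length = np.sum(Kd * sigma) * h*h   # Riemann sum; exact value is 2*pi*R
print(length)
```

With \(g\equiv 1\) the Riemann sum reproduces the circumference \(2\pi R\) to a few digits on this grid; refining h exhibits the convergence behavior discussed above.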

Table 1 Errors for a portion of circle

3.1 Integration of codimension one surfaces

We tested our numerical integration on two different portions of a circle, a torus, a quarter sphere and a three quarter sphere. We computed their respective lengths or surface areas by integrating the constant 1 over the curve or surface. Each of these tests was designed to exhibit the convergence rate of our formulations on cases of varying difficulty. In particular, the convergence rate of our formulation depends on the smoothness of the closest point mapping inside the tubular neighborhood of the curve or surface.

The results for the portions of a circle are given in Tables 1 and 2. In the first convergence study (Table 1), the line along which the closest point mapping has a jump discontinuity is parallel to the grid lines. In this case, we observe a second-order convergence rate using central differencing to compute the Jacobian matrix \(P'_{\Gamma }\). In the second test case, however, the portion of the circle is chosen so that the line along which the closest point mapping has a jump discontinuity is not parallel to the grid lines. In that case, the normal to the curve is rationally dependent on the step size of the Cartesian grid, and the convergence rate reduces to first order even though we used central differencing to compute \(P'_{\Gamma }\). We note that in these two tests, we chose \(\epsilon \) (the half width of the tubular neighborhood around the curve) small enough that the line along which the closest point mapping is discontinuous lies outside of the neighborhood.

Table 2 Errors for a tilted portion of circle
Table 3 Errors for a torus

In three dimensions, we first tested our method on a torus (a closed smooth surface). The results for the torus are reported in Table 3. In this case the closest point mapping is very smooth and we observe third-order convergence when using the exact signed distance function and a third-order difference scheme to approximate \(P'_{\Gamma }\) (see \(\text{ RE }_{\infty }\) in Table 3). We also tested our method with a computed signed distance function. We constructed the signed distance function using the algorithm described in [2], and compared the performance of our method with a fourth-order accurate signed distance function and a first-order accurate signed distance function (see \(\text{ RE }_{4}\) and \(\text{ RE }_{1}\) in Table 3). With the fourth-order accurate signed distance function we used a third-order accurate difference scheme to approximate \(P'_{\Gamma }\), and with the first-order accurate signed distance function we used a second-order accurate difference scheme to approximate \(P'_{\Gamma }\).

Fig. 4
figure 4

Three quarter sphere. The three quarter sphere and its corresponding \(\eta \)-level set \(\Gamma _{\eta }\)

For surfaces with boundaries, we tested the method on a quarter sphere and a three quarter sphere. The three quarter sphere case is illustrated in Fig. 4. We chose these two cases because the closest point mapping has a different degree of smoothness for each of these surfaces. For the quarter sphere, the closest point mapping is sufficiently smooth, but for the three quarter sphere, the tubular neighborhood around the surface contains the line along which the closest point mapping has a jump discontinuity. In the latter case, it is therefore necessary to use an adequate one-sided discretization to compute \(P'_{\Gamma }\) accurately. The one-sided discretization that we used is described in Sect. 3.3. The test for the quarter sphere still uses central differencing to compute \(P'_{\Gamma }\). The results for the portions of sphere are reported in Tables 4 and 5.

Table 4 Errors for a quarter sphere
Table 5 Errors for a three quarter sphere
Fig. 5
figure 5

Coil and one of its level sets. The coil and one of the level sets of the distance function to the coil used in the reported numerical simulations

3.2 Integrating along curves in three dimensions

In codimension 2, we tested our numerical integration on a coil wrapped around the helix defined parametrically as

$$\begin{aligned} \mathbf {x}(t)=\left( r\cos (t), r\sin (t), bt \right) , \end{aligned}$$

with \(r=0.75\) and \(b=0.25\). The coil is then wrapped around the helix at a distance of 0.2 from the helix. See Fig. 5. As our test case, we computed the length of the coil by integrating 1 along the curve. The results are reported in Table 6.
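A miniature analogue of this codimension-two test, using a planar circle of known length in place of the coil, can be sketched as follows. Since the kernel \(K_{\epsilon }^{1,1}\) of (19) is not reproduced here, the sketch substitutes a simple \(C^{1}\) kernel with unit mass on \([0,\epsilon ]\), and it uses the analytic singular value \(\sigma =R/\rho \), which for this circle equals \(1/(1-\kappa \eta \cos \theta )\), in place of finite differences; circle, kernel, and parameters are all our own stand-ins.

```python
import numpy as np

R, eps, h = 1.0, 0.3, 0.02   # circle radius, tube radius, grid spacing

# Cell-centered Cartesian grid covering the tubular neighborhood of the circle
xs = np.arange(-R - 2*eps, R + 2*eps, h) + h/2
zs = np.arange(-2*eps, 2*eps, h) + h/2
X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")

rho = np.hypot(X, Y)
d = np.hypot(rho - R, Z)     # distance to the circle {x^2 + y^2 = R^2, z = 0}
sigma = R / rho              # nonzero singular value, = 1/(1 - kappa*eta*cos(theta))

def K(eta):
    """C^1 kernel with unit mass on [0, eps] (a stand-in for K^{1,1} of (19))."""
    return np.where((eta > 0) & (eta < eps),
                    30.0/eps**5 * eta**2 * (eps - eta)**2, 0.0)

# Riemann sum for the volume integral of the theorem (d > 0 on this offset grid,
# and K vanishes outside the tube, so the quotient below is safe)
integrand = K(d) / d * sigma
length = np.sum(integrand) * h**3 / (2*np.pi)
print(length)                # close to 2*pi*R, the length of the circle
```

The same pipeline applies verbatim to the coil once its closest point mapping and distance function are available on the grid.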

Table 6 Errors for a coil

3.3 One-sided discretization of the Jacobian matrix

Here, for completeness, we describe the one-sided discretization used to compute the results reported in Table 5. For simplicity we provide the explanation in \(\mathbb R^2\); the discretization generalizes easily to 3D.

We will describe the one-sided discretization for a uniform Cartesian grid in \(\mathbb R^2\), namely for \(P_{\Gamma }(\mathbf {x}_{i,j})=(U_{i,j},V_{i,j})\) with \(\mathbf {x}_{i,j}=(ih,jh),\) \(i,j\in \mathbb {Z}\) and \(h>0\) being the step size. The Jacobian matrix will be approximated by simple finite differences defined below:

$$\begin{aligned} P'_{\Gamma }(\mathbf {x}_{i,j})\approx \left( \begin{array}{c@{\quad }c} (U_{x})_{i,j} &{} (U_{y})_{i,j}\\ (V_{x})_{i,j} &{} (V_{y})_{i,j} \end{array}\right) . \end{aligned}$$

The discretizations of U and V have to be defined together because the two functions are not independent of each other. With

$$\begin{aligned} (U_{x}^{\pm })_{i,j}:=\pm \frac{1}{2h}\left( -3U_{i,j}+4U_{i\pm 1,j}-U_{i\pm 2,j}\right) , \end{aligned}$$

and the smoothness indicator

$$\begin{aligned} S^\pm _{i,j}=S^{\pm }(U_{i,j}):=\triangle ^{+}\triangle ^{-}U_{i\pm 1,j} \end{aligned}$$

we define

$$\begin{aligned} (U_{x})_{i,j}:={\left\{ \begin{array}{ll} (U_{x}^{+})_{i,j}, &{} \text{ if } |S_{i,j}^{+}|\le |S_{i,j}^{-}|,\\ (U_{x}^{-})_{i,j}, &{} \text{ otherwise, } \end{array}\right. } \end{aligned}$$

and \((V_{x})_{i,j}\) is defined using the same choice of stencil, based on \(S^{\pm }(U_{i,j})\):

$$\begin{aligned} (V_{x})_{i,j}:={\left\{ \begin{array}{ll} (V_{x}^{+})_{i,j}, &{} \text{ if } |S_{i,j}^{+}|\le |S_{i,j}^{-}|,\\ (V_{x}^{-})_{i,j}, &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$

The discretization of \(U_{y}\) and \(V_{y}\) is defined similarly with the choice of the stencil based on \(S^{\pm }(V_{i,j})\).
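A compact Python sketch of this stencil selection follows; the kinked test function \(U=V=|x-0.55|\) is our own stand-in for a closest point component whose derivative jumps between grid points, not one of the paper's test cases.

```python
import numpy as np

def one_sided_dx(U, V, i, j, h):
    """Second-order one-sided x-derivatives of (U, V); the stencil for both
    components is chosen from the smoothness indicators of U."""
    Sp = U[i+2, j] - 2*U[i+1, j] + U[i, j]   # S^+_{ij}: second difference at i+1
    Sm = U[i, j] - 2*U[i-1, j] + U[i-2, j]   # S^-_{ij}: second difference at i-1
    sgn, step = (1, 1) if abs(Sp) <= abs(Sm) else (-1, -1)
    Ux = sgn*(-3*U[i, j] + 4*U[i+step, j] - U[i+2*step, j]) / (2*h)
    Vx = sgn*(-3*V[i, j] + 4*V[i+step, j] - V[i+2*step, j]) / (2*h)
    return Ux, Vx

# Demo on a function with a kink between grid points, mimicking a closest point
# component across the discontinuity line.
h = 0.1
x = np.arange(0.0, 1.01, h)
X, Y = np.meshgrid(x, x, indexing="ij")
U = V = np.abs(X - 0.55)

i, j = 5, 5                      # grid point x = 0.5, just left of the kink
Ux, Vx = one_sided_dx(U, V, i, j, h)
central = (U[i+1, j] - U[i-1, j]) / (2*h)
print(Ux, central)   # one-sided: -1.0 (exact for piecewise-linear U); central: -0.5
```

The smoothness indicators steer the stencil away from the kink, so the one-sided difference recovers the exact slope while the central difference averages across the jump in the derivative.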

4 Summary

In this paper, we presented a new approach for computing integrals along curves and surfaces that are defined implicitly, either by the distance function to these manifolds or by the closest point mappings. We are motivated by the abundance of discrete point sets sampled from surfaces by devices such as LIDAR, by the need to compute functionals defined over the underlying surfaces, and by the many applications involving the level set method or closest point methods.

In contrast to most existing approximations, which use either smeared-out Dirac delta functions or locally obtained parameterized patches, we derive a volume integral in the embedding Euclidean space that is equivalent to the desired surface or line integral. This allows for easy construction of higher-order numerical approximations of these integrals. The key component of this new approach is the use of singular values of the Jacobian matrix of the closest point mapping, which can be computed easily to high order, even by simple finite differences.