1 Background

Let \(\varGamma \) be a closed and compact \(C^{2}\) hypersurface that separates \({{\mathbb {R}}}^{m}\), \(m=2,3\), into a simply connected and bounded open region \(\varOmega \) and its complement. We consider the solution of the following Neumann boundary value problem for the Helmholtz equation:

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta u(x)+k^{2}u(x)=0, &{} x\in {{\mathbb {R}}}^{m}{\setminus }\bar{\varOmega },\\ \frac{\partial u}{\partial n}(x)=g(x), &{} x\in \partial \varOmega ,\\ \lim _{|x|\rightarrow \infty }|x|^{\frac{m-1}{2}}(\frac{\partial }{\partial r}-ik)u(x)=0, &{} r=|x|. \end{array}\right. } \end{aligned}$$
(1.1)

For wave scattering by a sound hard boundary \(\partial \varOmega ,\) a total wave field \(u^{\mathrm{tot}}(x)\) is a function satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta u(x)+k^{2}u(x)=0, &{} x\in {\mathbb {R}}^{m}{\setminus }\bar{\varOmega },\\ \frac{\partial u}{\partial n}(x)=0, &{} x\in \partial \varOmega . \end{array}\right. } \end{aligned}$$
(1.2)

This total wave field is then written artificially as the sum \(u^{\mathrm{tot}}=u^{\mathrm{inc}}+u^{\mathrm{sc}}\), where \(u^{\mathrm{inc}}\) is a known function that is used to model the far-field condition of \(u^{\mathrm{tot}}\) and \(u^{\mathrm{sc}}\) is the solution of (1.1) with the boundary data

$$\begin{aligned} g(x)=-\frac{\partial u^{\mathrm{inc}}}{\partial n}(x),\quad x\in \partial \varOmega . \end{aligned}$$
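For instance, for an incident plane wave \(u^{\mathrm{inc}}(x)=e^{ik\,d\cdot x}\) with unit direction d (a standard choice for illustration, though not the incident field used in the experiments below), this Neumann data has a closed form. A short Python sketch of ours:

```python
import numpy as np

def neumann_data_plane_wave(x, n, k, d):
    """g(x) = -du_inc/dn for the incident plane wave u_inc(x) = exp(i k d.x).

    x: point on Gamma, n: outward unit normal at x, d: unit propagation direction.
    """
    x, n, d = (np.asarray(v, float) for v in (x, n, d))
    u_inc = np.exp(1j * k * np.dot(d, x))
    # grad u_inc = i k d u_inc, hence du_inc/dn = i k (d . n) u_inc
    return -1j * k * np.dot(d, n) * u_inc
```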

Our work targets applications that require numerical solutions of (1.2) in exterior domains that may undergo a sequence of significant deformations. One potential application that meets these assumptions is shape optimization involving inverse scattering. In such scenarios,

  • The implicit representation of the domain boundaries is a natural choice and is convenient for handling changes in the shapes;

  • Boundary integral formulation is a natural choice, as typical applications assume that scattering takes place in unbounded domains in which waves propagate at a constant speed;

  • The solution of the Helmholtz equation is needed only on a lower-dimensional set, where receivers of the scattered wave field are positioned.

The novelty of the proposed algorithm involves (i) the assumption that \(\varGamma :=\partial \varOmega \) is defined by the zero level set of a signed distance function or the closest point mapping to it and (ii) a new way of computing surface integrals of the so-called hypersingular kernels.

1.1 Boundary integral formulations of the Helmholtz equation

Let \(\varPhi (x,y)\) be the fundamental solution of the Helmholtz equation:

$$\begin{aligned} \varPhi (x,y)={\left\{ \begin{array}{ll} \frac{i}{4}H_{0}^{(1)}(k|x-y|), &{} \;x,y\in {\mathbb {R}}^{2},\\ \frac{1}{4\pi |x-y|}e^{ik|x-y|}, &{} \;x,y\in {\mathbb {R}}^{3}, \end{array}\right. } \end{aligned}$$
(1.3)
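Both branches of (1.3) can be evaluated directly; the following sketch (the function name is ours) uses SciPy's Hankel function:

```python
import numpy as np
from scipy.special import hankel1

def fundamental_solution(x, y, k, m):
    """Phi(x, y) from Eq. (1.3): (i/4) H_0^(1)(k|x-y|) for m = 2,
    exp(i k |x-y|) / (4 pi |x-y|) for m = 3."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    if m == 2:
        return 0.25j * hankel1(0, k * r)
    if m == 3:
        return np.exp(1j * k * r) / (4.0 * np.pi * r)
    raise ValueError("m must be 2 or 3")
```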

where \(H_{0}^{(1)}\) is the Hankel function of the first kind of order 0. The solution u of the boundary value problem (1.1) can be represented by

$$\begin{aligned} u(x)=\int _{\varGamma }\varPhi (x,y)\alpha (y)\,\hbox {d}s(y),\quad x\in \bar{\varOmega }^{c}, \end{aligned}$$
(1.4)

where the potential density \(\alpha :\varGamma \mapsto {\mathbb {C}}\) satisfies the following integral equation

$$\begin{aligned} g(x)=\frac{\partial u}{\partial n}(x)=\int _{\varGamma }\frac{\partial \varPhi }{\partial n_{x}}(x,y)\alpha (y)\,\hbox {d}s(y)-\frac{1}{2}\alpha (x),\quad x\in \varGamma . \end{aligned}$$
(1.5)

Here \(g:\varGamma \mapsto {\mathbb {C}}\) is the given Neumann data as in (1.1). The sign in front of the “diagonal” term \(\frac{1}{2}\alpha \) in (1.5) depends on the sign of the fundamental solution and the orientation of the surface normals.

While (1.1) is well-posed for a large class of \(\varOmega ,\) the integral equation (1.5) is not uniquely solvable if k is an eigenvalue of the interior problem that is used to derive (1.4). In such a case, the integral operator has a nontrivial null space; see e.g. [8].

In this paper, we focus on a boundary integral formulation that is derived from the “combined field” representation [2, 18]

$$\begin{aligned} u(\tilde{x})=\int _{\varGamma }\frac{\partial \varPhi }{\partial n_{y}}(\tilde{x},y)\beta (y)\,\hbox {d}s(y)-i\xi \int _{\varGamma }\varPhi (\tilde{x},y)\beta (y) \,\hbox {d}s(y), \quad \tilde{x}\in \bar{\varOmega }^{c}, \end{aligned}$$
(1.6)

where \(\xi \) is some real number and the density \(\beta :\varGamma \mapsto {\mathbb {C}}\) satisfies

$$\begin{aligned} g(x)&=\lim _{\eta \rightarrow 0^{+}}\frac{\partial u}{\partial n}(x+\eta n_{x})\nonumber \\&=\lim _{\eta \rightarrow 0^{+}}\frac{\partial }{\partial n_{x}}\int _{\varGamma }\frac{\partial \varPhi }{\partial n_{y}}(x+\eta n_{x},y)\beta (y)\,\hbox {d}s(y)\nonumber \\&\quad -i\xi \Big (\int _{\varGamma }\frac{\partial \varPhi }{\partial n_{x}}(x,y)\beta (y)\,\hbox {d}s(y)-\frac{1}{2}\beta (x)\Big ), \end{aligned}$$
(1.7)

for all \(x\in \varGamma \). The “combined field” formulation (1.6)–(1.7) is preferred because it does not suffer from the non-uniqueness problem discussed above. While in theory the parameter \(\xi \) can be any nonzero real number, one might wish to use a value that performs better in practice for computation. This is discussed in Sect. 2.4.1.

Fig. 1

The blue and dashed curves are the graphs of the hypersingular kernel \(\frac{\partial ^{2}\Psi }{\partial n_{x_{\eta }}\partial n_{y}}(x_{\eta },y)\), with \(\Psi (x,y)=\log |x-y|\), as a function of y restricted to the unit circle; \(\eta =1\) for the blue curve and \(\eta =3\) for the dashed curve

Notice that for \(\eta \ne 0\), \(x+\eta n_{x}\notin \varGamma \), so the derivative of the right-hand side of (1.6) in the normal direction exists and the commutation of differentiation and integration is justified. For convenience, let \(x_{\eta }:=x+\eta n_{x}\). Thus for any \(\eta \ne 0\), the integral

$$\begin{aligned} \int _{\varGamma }\frac{\partial ^{2}\varPhi }{\partial n_{x_{\eta }}\partial n_{y}}(x_{\eta },y)\beta (y)\,\hbox {d}s(y) \end{aligned}$$
(1.8)

can be naturally interpreted in the Riemann sense. However, for \(\eta =0\), i.e., \(x_{\eta }=x\in \varGamma \), (1.8) requires some special interpretation because the second derivative of \(\varPhi \) is no longer integrable. For example, in three dimensions, \(\frac{\partial ^{2}\varPhi }{\partial n_{x}\partial n_{y}}\sim \mathcal {O}(|x-y|^{-3})\). In this case, (1.8) is often referred to as a hypersingular integral. Figure 1 shows the graph of \(\frac{\partial ^{2}\Psi }{\partial n_{x_{\eta }}\partial n_{y}}(x_{\eta },y)\) for y lying on a unit circle and for two different values of \(\eta \). It also shows that the singularity in \(\frac{\partial ^{2}\Psi }{\partial n_{x_{\eta }}\partial n_{y}}(x_{\eta },y)\), as \(x_{\eta }\) approaches \(y\in \varGamma \), is not concentrated enough to be removed easily. However, for any \(x_{\eta }\notin \varGamma \), the integral (1.8) is well defined due to cancellation of the positive and negative parts of the integrand. In two dimensions, the hypersingular integral can be interpreted in Hadamard's sense, but this strategy is inconvenient for numerical computation and does not generalize well to three dimensions. A more common approach is to rewrite the first term in (1.7) into an integral that involves partial derivatives of \(\varPhi \), first derived in [15]:

$$\begin{aligned} \int _{\varGamma }\left( n_{x}\times \nabla _{x}\varPhi (x,y)\right) \cdot \left( n_{y}\times \nabla _{y}\beta (y)\right) \hbox {d}s(y)+k^{2}\int _{\varGamma }n_{x}\cdot n_{y}\varPhi (x,y)\beta (y)\hbox {d}s(y). \end{aligned}$$
(1.9)

There has been a lot of work in developing efficient numerical algorithms for solving the combined field integral equations based on (1.9), and they all rely on explicit parameterization of \(\varGamma \). See, for example, [3] for an extensive review of the related work as well as an efficient method based on regularization of (1.9). The method that we propose in the next section makes use of the regularity of \(\partial u/\partial n\) and directly approximates its limit as defined in (1.7).
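The cancellation observed in Fig. 1 is easy to check numerically for the Laplace kernel \(\Psi (x,y)=\log |x-y|\): for x strictly outside the unit circle, the double-layer potential of a constant density vanishes identically, so the integral of the hypersingular kernel over the circle must be zero even though the integrand is large and of both signs. A sketch of our own (not from the paper):

```python
import numpy as np

def hypersingular_log(x, nx, y, ny):
    # d^2/(dn_x dn_y) log|x - y| for the 2-D Laplace kernel:
    # (2 (d.nx)(d.ny) - |d|^2 (nx.ny)) / |d|^4, with d = x - y
    d = x - y
    r2 = d @ d
    return (2.0 * (d @ nx) * (d @ ny) - r2 * (nx @ ny)) / r2**2

# y on the unit circle with outward normal n_y = y; x = (1 + eta, 0) off Gamma
N = 2000
t = 2.0 * np.pi * np.arange(N) / N
ys = np.stack([np.cos(t), np.sin(t)], axis=1)
eta = 1.0
x, nx = np.array([1.0 + eta, 0.0]), np.array([1.0, 0.0])
vals = np.array([hypersingular_log(x, nx, y, y) for y in ys])
# positive and negative parts cancel: the integral over the circle vanishes
integral = vals.sum() * (2.0 * np.pi / N)
```

The trapezoidal sum over the periodic integrand converges spectrally, so the computed integral is zero to machine precision.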

2 The implicit boundary integral methods

In this section, we first describe the implicit boundary integral formulation of [11] as it lays out the foundation of the proposed algorithm. The main contribution of this paper is presented in Sects. 2.3 and 2.4. We first introduce some essential definitions below.

Definition 1

Let \(d_{\varOmega }:{\mathbb {R}}^{m}\mapsto {\mathbb {R}}\) be the signed distance function to \(\varGamma =\partial \varOmega \) such that \(d_{\varOmega }(x)>0\) for \(x\in \bar{\varOmega }^{c}\) and \(d_{\varOmega }(x)<0\) for \(x\in \varOmega .\) Define the normal vector \(n_{x}:=\nabla d_{\varOmega }(x)\) and the normal derivative of a function f at x to be \(\partial f/\partial n=\partial f/\partial n_{x}:=\lim _{h\rightarrow 0+}\nabla f(x+hn_{x})\cdot n_{x}\).

Definition 2

Let \(\varGamma =\partial \varOmega \) be a smooth, compact and closed hypersurface in \({\mathbb {R}}^{d}\). The closest point mapping \(P_{\varGamma }:{\mathbb {R}}^{d}\mapsto \varGamma \) is defined by

$$\begin{aligned} P_{\varGamma }(x):=\mathrm {argmin}_{y\in \varGamma }|x-y|. \end{aligned}$$

Given a pair of points x and y,  we define further the mapping of x to its closest point on \(\varGamma _{d_{\varOmega }(y)},\) the level set of \(d_{\varOmega }\) where y lies:

$$\begin{aligned} P_{\varGamma }^{y}(x):=\mathrm {argmin}_{\{z:d_{\varOmega }(z)=d_{\varOmega }(y)\}}|x-z|. \end{aligned}$$

We remark here that when the (unsigned) distance from x to \(\varGamma \) is larger than the minimal radius of curvature of \(\varGamma \), \(P_{\varGamma }(x)\) and \(P_{\varGamma }^{y}(x)\) may be formally ill-defined because multiple points on \(\varGamma \) could be equally distant to x. In this case, we shall randomly assign one of the equidistant points as the definition of \(P_{\varGamma }(x)\) and \(P_{\varGamma }^{y}(x)\).

If the distance function \(d_{\varOmega }\) is differentiable at x,

$$\begin{aligned} P_{\varGamma }(x)= & {} x-d_{\varOmega }(x)\nabla d_{\varOmega }(x), \end{aligned}$$
(2.1)
$$\begin{aligned} P_{\varGamma }^{y}(x)= & {} x-(d_{\varOmega }(x)-d_{\varOmega }(y))\nabla d_{\varOmega }(x). \end{aligned}$$
(2.2)
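For a concrete example (ours, not from the paper), take \(\varOmega \) to be the disk of radius R, for which \(d_{\varOmega }(x)=|x|-R\) and \(\nabla d_{\varOmega }(x)=x/|x|\); both mappings then have closed forms:

```python
import numpy as np

def closest_point(x, R=1.0):
    # P_Gamma(x) = x - d(x) grad d(x), cf. Eq. (2.1), for the circle of radius R
    x = np.asarray(x, float)
    r = np.linalg.norm(x)
    return x - (r - R) * (x / r)

def closest_point_on_level(x, y, R=1.0):
    # closest point of x on the level set {d = d(y)}: x is moved along the
    # normal by the distance gap d(x) - d(y), cf. Eq. (2.2)
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx = np.linalg.norm(x) - R
    dy = np.linalg.norm(y) - R
    return x - (dx - dy) * (x / np.linalg.norm(x))
```

For R = 1, the level set \(\{d_{\varOmega }=1\}\) is the circle of radius 2, so a point on the positive axis is mapped radially onto it.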

The computation of signed distance functions and closest point mappings is by now a standard routine in the level set methods [17] and can be done to high-order accuracy in many different ways, e.g., [1, 5, 21, 24], where, by extending the interface coordinates as constants along the interface normals, \(P_{\varGamma }\) can be computed easily to fourth order in the grid spacing. Once an accurate distance function is computed on the grid, its gradient can be computed by standard finite differencing or by more accurate but wider WENO stencils [7]. For wider stencils, it is necessary to consider one-sided non-oscillatory approximations of the derivatives in order to minimize errors due to differencing across kinks of the distance function. The closest point mapping is used in [22] as an Eulerian method to track interfaces and in [14, 20] for solving PDEs on surfaces. In the case that \(\varGamma \) is given as a collection of parameterized patches, one may use the fast algorithm proposed in [23] to compute the closest point mappings.

2.1 Exact integral formulations using signed distance functions

Let \(\beta \) be a smooth function defined on \(\varGamma .\) We are interested in computing the integral over \(\varGamma \)

$$\begin{aligned} I[\beta ]:=\int _{\varGamma }\beta (y)\hbox {d}s(y), \end{aligned}$$
(2.3)

where \(\varGamma \) is defined by either the closest point mapping \(P_{\varGamma }\) or the signed distance function \(d_{\varOmega }\). Below, we follow the construction first introduced in [11] to derive a volume integral \(I_{\epsilon }[\beta ]\) that is identical to \(I[\beta ].\)

We consider the closest point mapping \(P_{\varGamma }\) from the \(\eta \)-level set of the distance function, \(\varGamma _{\eta }:=\left\{ x:d_{\varOmega }(x)=\eta \right\} \), to \(\varGamma \), and use it as a change of variables to rewrite \(I[\beta ]\). For any \(\eta \in {\mathbb {R}}\) whose absolute value is smaller than the minimal radius of curvature of \(\varGamma \), \(P_{\varGamma }\) is a diffeomorphism between \(\varGamma _{\eta }\) and \(\varGamma \), and therefore,

$$\begin{aligned} \int _{\varGamma }\beta (y)\hbox {d}s(y)=\int _{\varGamma _{\eta }}\beta (P_{\varGamma }(y))\, J_{\varGamma }(y)\hbox {d}s(y), \end{aligned}$$
(2.4)

where the Jacobian \(J_{\varGamma }(y)\) comes from the change of variables defined by the restriction of \(P_{\varGamma }\) to \(\varGamma _{\eta }\). In [11, 12], it is shown that

$$\begin{aligned} J_{\varGamma }(y)=\prod _{j=1}^{m-1}\sigma _{j}\left( P_{\varGamma }^{\prime }(y)\right) =1+d_{\varOmega }(y)H(y)+d_{\varOmega }^{2}(y)G(y), \end{aligned}$$

where H(y), G(y) are the mean and Gaussian curvatures of \(\varGamma _{d_{\varOmega }(y)}\) (for \(m=2\), H(y) is the curvature of the level set and \(G(y)\equiv 0\)), and \(\sigma _{j}(P_{\varGamma }^{\prime }(y))\) is the jth singular value of the Jacobian matrix \(P_{\varGamma }^{\prime }(y).\)
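The characterization of \(J_{\varGamma }\) as a product of singular values can be checked numerically. The sketch below (ours) differences the closest point mapping of the unit circle and takes the nontrivial singular value of \(P_{\varGamma }^{\prime }\), which for the unit circle should equal \(1/|y|\):

```python
import numpy as np

def P(x, R=1.0):
    # closest point mapping onto the circle of radius R
    return R * x / np.linalg.norm(x)

def jacobian_factor(y, h=1e-6):
    # J_Gamma(y): product of the m-1 nontrivial singular values of a
    # centered-difference approximation of P'(y), in the spirit of [12]
    m = y.size
    Jmat = np.zeros((m, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = h
        Jmat[:, j] = (P(y + e) - P(y - e)) / (2.0 * h)
    s = np.linalg.svd(Jmat, compute_uv=False)  # sorted in descending order
    return np.prod(s[:m - 1])
```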

Averaging the integrals in (2.4) with an averaging kernel, \(\delta _{\epsilon }\), compactly supported in \((-\epsilon ,\epsilon )\) and having unit mass, we obtain

$$\begin{aligned} I[\beta ]=\int _{\varGamma }\beta (y)\hbox {d}s=\int _{-\epsilon }^{\epsilon }\delta _{\epsilon }(\eta ) \int _{\varGamma _{\eta }}\beta (P_{\varGamma }(y))J_{\varGamma }(y)\hbox {d}s~\hbox {d}\eta . \end{aligned}$$

Finally, applying the coarea formula [6] to the integral on the right-hand side, we obtain

$$\begin{aligned} I[\beta ]=\int _{|d_{\varOmega }|<\epsilon }\beta (P_{\varGamma }(y))\,J_{\varGamma }(y) \delta _{\epsilon }(d_{\varOmega }(y))\,\hbox {d}y. \end{aligned}$$
(2.5)
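As a sanity check of the formulation (ours, not an experiment from the paper), one can verify (2.5) on a uniform Cartesian grid for the unit circle, where \(P_{\varGamma }(y)=y/|y|\) and \(J_{\varGamma }(y)=1/|y|\), with a cosine-shaped averaging kernel:

```python
import numpy as np

def w_cos(r, eps):
    # cosine averaging kernel: unit mass, supported in (-eps, eps)
    return np.where(np.abs(r) < eps, (1.0 + np.cos(np.pi * r / eps)) / (2.0 * eps), 0.0)

h, eps = 0.01, 0.2
g = np.arange(-1.5, 1.5, h) + h / 2.0      # cell-centered grid; avoids the origin
X, Y = np.meshgrid(g, g, indexing="ij")
R = np.sqrt(X**2 + Y**2)
d = R - 1.0                                # signed distance to the unit circle
beta_cp = 1.0 + (X / R) ** 2               # beta(P_Gamma(y)) for beta(y) = 1 + y_1^2
J = 1.0 / R                                # Jacobian factor for circles
I = np.sum(beta_cp * J * w_cos(d, eps)) * h * h
# exact value: int_Gamma (1 + y_1^2) ds(y) = 2*pi + pi = 3*pi
```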

We remark that it is not necessary to use an averaging kernel that is supported in \((-\epsilon ,\epsilon ).\) In Sect. 2.3, we shall mainly consider kernels that are supported in \((0,\epsilon )\) or \((\theta \epsilon ,\epsilon )\) for some \(\theta \in [0,1).\)

The Jacobian \(J_{\varGamma }\) required by this formulation can be evaluated by computing the curvatures H and G directly, using standard centered finite differencing applied to the distance function, as in [11], or by applying a singular value decomposition to a difference approximation of \(P_{\varGamma }^{\prime }\), as in [12]. The exact integral formulation we use in this work was first proposed in [11] and extended in [12]. It is also generalized in [4] to simulate the Mullins–Sekerka dynamics of complicated interface geometries in unbounded domains.

2.2 Implicit boundary integral methods (IBIMs)

We now describe the implicit boundary integral formulation for integral equations of the form

$$\begin{aligned} \int _{\varGamma }\beta (y)K(x,y)\hbox {d}s(y)+\lambda \beta (x)=f(x),~~~~\text{ for } x\in \varGamma . \end{aligned}$$
(2.6)

The implicit boundary integral formulation involves extensions of the potential density \(\beta \), the boundary data f and restriction of the kernel K.

Definition 3

(Extension and restriction of functions via the closest point mapping to \(\varGamma \)) Let \(u:\varGamma \mapsto {\mathbb {C}}\). The constant extension of u along the normals of \(\varGamma \) is defined as \(\bar{u}:{\mathbb {R}}^{m}\mapsto {\mathbb {C}}\),

$$\begin{aligned} \bar{u}(x):=u\left( P_{\varGamma }(x)\right) . \end{aligned}$$

Assume that \(\varGamma \) is a \(C^{2}\) hypersurface. In a sufficiently small neighborhood of \(\varGamma \), the weighted extension of u is defined as

$$\begin{aligned} \bar{u}_{\varGamma }(x):=u(P_{\varGamma }(x))J_{\varGamma }(x). \end{aligned}$$

Let \(K:{\mathbb {R}}^{m}\times {\mathbb {R}}^{m}\mapsto {\mathbb {C}}\). The restriction of K to \(\varGamma \) is defined by

$$\begin{aligned} K_{\varGamma }(x,y):=K(P_{\varGamma }(x),P_{\varGamma }(y)). \end{aligned}$$

Hence, for \(\epsilon \) sufficiently small, we derive the integral equation

$$\begin{aligned} \int _{|d_{\varOmega }|\le \epsilon }\bar{\beta }(y)K_{\varGamma }(x,y) \delta _{\epsilon }(d_{\varOmega }(y))J_{\varGamma }(y)\hbox {d}y+\lambda \bar{\beta }(x)=\bar{f}(x),\quad x\in {\mathbb {R}}^{m}. \end{aligned}$$
(2.7)

Formula (2.7) provides new possibilities for discretizing the boundary integral equation (2.6) without the need for parameterization; it can be discretized on different grid geometries using different quadrature rules.

For convenience, we shall use the following notation for the tubular neighborhood of \(\varGamma \).

Definition 4

\(T_{(\epsilon _{1},\epsilon _{2})}=\{x\in {\mathbb {R}}^{d}:\epsilon _{1}<d_{\varOmega }(x)<\epsilon _{2}\}.\)

By construction, (2.7) has a unique solution if (2.6) does. Furthermore, the solution of (2.7) is automatically the constant extension of the solution of (2.6). The implication is that in forming the integral equation, either at the continuum level or at a discretized level, we do not need to artificially enforce the constant extension of the density. This property is summarized below.

Proposition 5

Let \(\tilde{\beta }\in C^{1}(T_{(\epsilon _{1},\epsilon _{2})};{\mathbb {C}})\) be a solution of (2.7). Then

$$\begin{aligned} \frac{\partial \tilde{\beta }}{\partial n}(x)=0,\quad \forall x\in T_{(\epsilon _{1},\epsilon _{2})}. \end{aligned}$$

On uniform Cartesian grids, \(K_{\varGamma }\) in (2.7) should be replaced by a suitable regularization of the singularity in K when \(|x-y|\rightarrow 0.\) Here, we follow the approach of [11] which replaces the values of \(K_{\varGamma }(x,y)\) by a function \(\bar{K}(P_{\varGamma }(x),r_{0}),\) where \(r_{0}\) corresponds to a small regularization parameter.

2.2.1 Regularization of the double-layer potentials

Let \(K(x,y)=\frac{\partial \varPhi }{\partial n_{x}}(x,y)\) with \(\varPhi \) defined in (1.3). We then have

$$\begin{aligned} K(x,y)=\left( \varPhi ^{\prime }(|x-y|)|x-y|\right) \frac{(x-y)}{|x-y|^{2}}\cdot n_{x}=\mathcal {O}(|x-y|^{2-m}),\quad x\rightarrow y; \end{aligned}$$

the singularity in \(\varPhi \) is of the same order as that of the fundamental solution of the Laplace equation; i.e., \(\varPhi (x,y)=\mathcal {O}(\log |x-y|)\) in two dimensions (\(m=2\)), and \(\mathcal {O}(|x-y|^{-1})\) in three dimensions (\(m=3\)).

For \(m=2,\) one can show easily that

$$\begin{aligned} \lim _{x\rightarrow y}K(x,y)=-\frac{\kappa _{\varGamma }(y)}{4\pi }, \end{aligned}$$

where \(\kappa _{\varGamma }(y)\) is the unsigned curvature of \(\varGamma \) at y; it arises from the limit of the inner product \(\frac{(x-y)}{|x-y|^{2}}\cdot n_{y}\).
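This limit is easy to confirm numerically on the unit circle, where \(\kappa _{\varGamma }\equiv 1\); a sketch of ours, with function names of our choosing:

```python
import numpy as np
from scipy.special import hankel1

def dlp_kernel(x, nx, y, k):
    # K(x, y) = dPhi/dn_x with Phi = (i/4) H_0^(1)(k|x-y|), m = 2;
    # uses (d/dz) H_0^(1)(z) = -H_1^(1)(z)
    d = x - y
    r = np.linalg.norm(d)
    return -(0.25j * k) * hankel1(1, k * r) * (d @ nx) / r

k = 2.0
x, nx = np.array([1.0, 0.0]), np.array([1.0, 0.0])
vals = [dlp_kernel(x, nx, np.array([np.cos(t), np.sin(t)]), k)
        for t in (0.1, 0.05, 0.025)]
# as y -> x along the circle, the values approach -kappa/(4 pi) = -1/(4 pi)
```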

For \(m=3\), \(K(x,y)\) does not have a point-wise limit as \(|x-y|\rightarrow 0.\) However, the integral of \(K(x,y)\) on \(U(x,r_{0}):=\{y\in \varGamma :|x-y|\le r_{0}\}\) for any \(r_{0}>0\) is well defined and has a limit as \(r_{0}\rightarrow 0.\) More precisely, let

$$\begin{aligned} \bar{K}(x,r_{0}):=\frac{1}{|U(x,r_{0})|}\int _{U(x,r_{0})}K(x,y)\hbox {d}s(y), \end{aligned}$$

where \(|U(x,r_{0})|\) is the area of \(U(x,r_{0}).\) Thus, for any smooth potential density \(\alpha \),

$$\begin{aligned} \int _{U(x,r_{0})}\bar{\alpha }(y)K(x,y)\hbox {d}s(y)=\bar{\alpha }(x)\bar{K}(x,r_{0})|U(x,r_{0})|+\mathcal {O}(r_{0}^{2}),\quad x\in \varGamma . \end{aligned}$$

Now suppose that the tubular neighborhood \(\{|d_{\varOmega }|<\epsilon \}\) is discretized by some grid and we only have the values of \(P_{\varGamma }\) at some grid nodes. Then \(\bar{K}(x,r_{0})\) has to be further approximated using such limited information. Approximating \(\varGamma \) by an osculating paraboloid, one can derive an approximation of \(\bar{K}(x,r_{0})\) for each \(x\in \varGamma \):

$$\begin{aligned} \bar{K}(x,r_{0})&=-\left( \frac{1}{4\pi r_{0}}-\frac{5}{256\pi }(4H(x)-G(x))+\left( \frac{25}{768}G(x)+\frac{k^{2}}{24\pi }\right) \,r_{0}\right) H(x)+\mathcal {O}(r_{0}^{2}), \end{aligned}$$

where H(x) and G(x) are the mean and Gaussian curvatures of \(\varGamma \) evaluated at x.

In summary, we define

$$\begin{aligned} \tilde{K}(x,r_{0}):={\left\{ \begin{array}{ll} -\frac{\kappa }{4\pi }, &{} m=2,\\ -\left( \frac{1}{4\pi r_{0}}-\frac{5}{256\pi }(4H(x)-G(x))+\left( \frac{25}{768} G(x)+\frac{k^{2}}{24\pi }\right) \,r_{0}\right) H(x), &{} m=3, \end{array}\right. } \end{aligned}$$
(2.8)

and the regularization of \(K_{\varGamma }\)

$$\begin{aligned} K^{\text {reg}}(x,y,r_{0}):=1_{\{|x-y|\ge r_{0}\}}(x,y)K(x,y)+1_{\{|x-y|<r_{0}\}}(x,y)\tilde{K}(x,r_{0}). \end{aligned}$$
(2.9)
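In code, (2.9) is a simple branch on \(|x-y|\). A minimal sketch of ours, using the two-dimensional replacement value \(-\kappa /(4\pi )\) and, away from the diagonal, the Laplace-limit behavior of the double-layer kernel on the unit circle:

```python
import numpy as np

def K_reg(x, y, r0, K, K_tilde):
    # Eq. (2.9): the original kernel away from the diagonal,
    # the regularized value inside the ball |x - y| < r0
    if np.linalg.norm(x - y) >= r0:
        return K(x, y)
    return K_tilde(x, r0)

# on the unit circle (kappa = 1, n_x = x), the Laplace-limit double-layer
# kernel -((x - y).n_x) / (2 pi |x - y|^2) equals -1/(4 pi) identically
K = lambda x, y: -((x - y) @ x) / (2.0 * np.pi * ((x - y) @ (x - y)))
K_tilde = lambda x, r0: -1.0 / (4.0 * np.pi)

x = np.array([1.0, 0.0])
near = np.array([np.cos(0.05), np.sin(0.05)])   # |x - near| < r0 = 0.1
far = np.array([np.cos(0.5), np.sin(0.5)])      # |x - far| >= r0
```

Both branches return \(-1/(4\pi )\) here, illustrating that the regularization is continuous across \(|x-y|=r_{0}\) in this example.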

2.2.2 The Implicit Boundary Integral Method (IBIM)

We formulate an IBIM for the Helmholtz–Neumann problem (1.1) based on (1.5):

$$\begin{aligned} \int _{|d_{\varOmega }|<\epsilon }\bar{\alpha }(y)\,K^{\text {reg}}(P_{\varGamma }(x),P_{\varGamma }(y),r_{0})\,J_{\varGamma }(y)\,w_{\epsilon }(d_{\varOmega }(y))\,\hbox {d}y-\frac{1}{2}\bar{\alpha }(x)=\bar{g}(x),\quad |d_{\varOmega }(x)|<\epsilon , \end{aligned}$$
(2.10)

$$\begin{aligned} u(x)=\int _{|d_{\varOmega }|<\epsilon }\bar{\alpha }(y)\,\varPhi (x,P_{\varGamma }(y))\,J_{\varGamma }(y)\,w_{\epsilon }(d_{\varOmega }(y))\,\hbox {d}y,\quad x\in \bar{\varOmega }^{c}. \end{aligned}$$
(2.11)

Discretization on uniform Cartesian grids and accuracy. On the uniform Cartesian grid \(h\mathbb {Z}^{d}\), (2.10) can be discretized as follows: for \(x_{j},x_{k}\in h\mathbb {Z}^{d}\) with \(|d_{\varOmega }(x_{j})|<\epsilon \),

$$\begin{aligned} \sum _{|d_{\varOmega }(x_{k})|<\epsilon }\bar{\alpha }(x_{k})\, K^{\text {reg}}(P_{\varGamma }(x_{j}),P_{\varGamma }(x_{k}),r_{0})\,J_{\varGamma }(x_{k})\, w_{\epsilon }(d_{\varOmega }(x_{k}))h^{m}-\frac{1}{2}\bar{\alpha }(x_{j}) =\bar{g}(x_{j}). \end{aligned}$$
(2.12)

Similarly, (2.11) reduces to

$$\begin{aligned} u(x)=\sum _{|d_{\varOmega }(x_{k})|<\epsilon }\bar{\alpha }(x_{k})\, \varPhi (x,P_{\varGamma }(x_{k}))\,J_{\varGamma }(x_{k})\,w_{\epsilon }(d_{\varOmega }(x_{k}))h^{m},\quad x\in \bar{\varOmega }^{c}. \end{aligned}$$
(2.13)

In practice, we found that the averaging kernel

$$\begin{aligned} w_{\epsilon }^{\text {cos}}(r)=\frac{1}{2\epsilon }\left( 1+\cos \left( \frac{\pi r}{\epsilon }\right) \right) \mathbf {1}_{(-1,1)}\left( \frac{r}{\epsilon }\right) \end{aligned}$$
(2.14)

performs quite well with \(r_{0}\) a very small constant independent of h for \(m=2,\) and \(r_{0}=C_{0}h\), \(C_{0}\ge 1\) for \(m=3.\)

The approximation error of the discrete system (2.12)–(2.13) can be split into an analytical error, \(E_{\text {ker}}\), and a numerical quadrature error, \(E_{\text {quad}}\):

$$\begin{aligned} \text {error}=\,E_{\text {quad}}(h,\epsilon ;w)+E_{\text {ker}}(\epsilon ,r_{0}). \end{aligned}$$
(2.15)

\(E_{\text {ker}}(\epsilon ,r_{0})\) bounds the errors made in the analytical formulation of an IBIM. Assuming that (2.10) and (2.12) are invertible, \(E_{\text {ker}}(\epsilon ,r_{0})\sim \mathcal {O}(r_{0}^{2})\). \(E_{\text {quad}}(h,\epsilon ;w)\) is the numerical error of the quadrature rule used in (2.13) to approximate (2.11). On a uniform Cartesian grid, since the averaging kernel \(w_{\epsilon }\) is compactly supported, for sufficiently small \(\epsilon \), (2.13) is equivalent to applying the trapezoidal rule dimension by dimension. Hence, the accuracy of the discrete sum (2.13) depends on the smoothness of \(\bar{\alpha }(y)\varPhi (x,P_{\varGamma }(y))J_{\varGamma }(y)w_{\epsilon }(d_{\varOmega }(y))\); if it is \(C^{q}\), then \(E_{\text {quad}}(h,\epsilon ;w)\sim \mathcal {O}(\frac{h^{q}}{\epsilon ^{q}})\).

2.3 Extrapolative integrals

In this section, we propose a way to approximate (1.7) using the framework described in the previous section.

Let \(\varGamma \) be a closed and compact \(C^{2}\) hypersurface, and let \(K(x,y)\) be continuous for \(x\ne y\) such that for any \(\beta \in C^{1}(\varGamma ,{\mathbb {C}})\),

$$\begin{aligned} I[\beta ](x):=\int _{\varGamma }\beta (y)\,K(x,y)\hbox {d}s(y) \end{aligned}$$

is \(C^{p}\) in \({\mathbb {R}}^{d}{\setminus }\varGamma \) with \(p\ge 1\). We would like to estimate the limit

$$\begin{aligned} I[\beta ](x^{*}):=\lim _{\eta \rightarrow 0}I[\beta ](x^{*}+\eta n_{x^{*}}). \end{aligned}$$

Similar to (2.4), we have for any \(x^{*}\in \varGamma \) and sufficiently small \(\eta ,\)

$$\begin{aligned} I[\beta ](x^{*}+\eta n_{x^{*}})=\int _{\varGamma _{\eta }}\beta (P_{\varGamma }(y))\,K\left( x^{*}+\eta n_{x^{*}},P_{\varGamma }(y)\right) J_{\varGamma }(y)\hbox {d}s(y). \end{aligned}$$
(2.16)

The next step is again to average \(I[\beta ](x^{*}+\eta n_{x^{*}})\) using a suitable kernel.

Definition 6

Let \(\theta \in [0,1)\), and let p and q be two positive integers. Let \({\mathbb {K}}_{q,\theta }^{(p)}\) be the set of functions in \(C^{q}({\mathbb {R}},{\mathbb {R}})\), supported in \((\theta ,1)\), that have unit mass and p vanishing moments; that is,

$$\begin{aligned} f\in \mathbb {K}_{q,\theta }^{(p)}\implies f\in C^{q}({\mathbb {R}},{\mathbb {R}})\quad \text {and}\quad \int _{{\mathbb {R}}}f(r)r^{\ell }\hbox {d}r={\left\{ \begin{array}{ll} 1, &{} \ell =0,\\ 0, &{} 1\le \ell \le p. \end{array}\right. } \end{aligned}$$

Let \(w\in \mathbb {K}_{q,\theta }^{(p)}\), then

$$\begin{aligned} \int _{\eta \in (\theta \epsilon ,\epsilon )}\frac{1}{\epsilon }w (\frac{\eta }{\epsilon })\,I[\beta ](x^{*}+\eta n_{x^{*}})\hbox {d}\eta&=\int _{(\theta ,1)}w(r)I[\beta ](x^{*}+\epsilon rn_{x^{*}})\hbox {d}r\\&=\int _{(\theta ,1)}w(r)\left( I[\beta ](x^{*})+ \epsilon rI^{\prime }[\beta ](x^{*})+\cdots \right) \hbox {d}r\\&=I[\beta ](x^{*})+C_{K,\beta }\,\epsilon ^{p+1}, \end{aligned}$$

where \(C_{K,\beta }\) is a constant depending on K and \(\beta .\) Hence, the role of the averaging kernel w is to extrapolate \(I[\beta ](x^{*}+\eta n_{x^{*}})\) in \(\eta n_{x^{*}}\) for estimation of \(I[\beta ](x^{*}).\)
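The role of the vanishing moments can be tested numerically: construct the polynomial factor by solving the \(2\times 2\) moment system for the bump \(\exp \left( \frac{1}{2(r-\theta )(r-1)}\right) \) (the same bump used in Sect. 2.3.1 below), and check that averaging \(f(\epsilon r)\) recovers \(f(0)\) to second order in \(\epsilon \). A sketch of ours:

```python
import numpy as np

theta = 0.1
r = np.linspace(theta, 1.0, 20001)
dr = r[1] - r[0]
bump = np.zeros_like(r)
inside = (r > theta) & (r < 1.0)
bump[inside] = np.exp(1.0 / (2.0 * (r[inside] - theta) * (r[inside] - 1.0)))

def integrate(f):
    # uniform-grid quadrature on [theta, 1]; the integrand vanishes at both ends
    return f.sum() * dr

# solve the 2x2 moment system so that w = (c0 + c1*r)*bump has unit mass
# and one vanishing moment (Definition 6 with p = 1)
m0, m1, m2 = integrate(bump), integrate(r * bump), integrate(r**2 * bump)
c0, c1 = np.linalg.solve([[m0, m1], [m1, m2]], [1.0, 0.0])
w = (c0 + c1 * r) * bump

def average(f, eps):
    # int w(r) f(eps*r) dr, which should equal f(0) + O(eps^2)
    return integrate(w * f(eps * r))

err1 = abs(average(np.exp, 0.10) - 1.0)
err2 = abs(average(np.exp, 0.05) - 1.0)
```

One can compare the computed coefficients \(c_{0},c_{1}\) with those of \(a_{1}\) listed in Sect. 2.3.1.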

Now, take \(y\in T_{(\theta \epsilon ,\epsilon )}\) and \(\eta =d_{\varOmega }(y)\) in (2.16), and then apply the coarea formula to obtain

$$\begin{aligned} \int _{\eta \in (\theta \epsilon ,\epsilon )}\frac{1}{\epsilon }w\left( \frac{\eta }{\epsilon }\right) \,I[\beta ](x^{*}+\eta n_{x^{*}})\hbox {d}\eta =\int _{{\mathbb {R}}^{d}}\beta (P_{\varGamma }(y))\,K(x^{*}+d_{\varOmega }(y)n_{x^{*}},P_{\varGamma }(y))J_{\varGamma }(y)\,\frac{1}{\epsilon }w\left( \frac{d_{\varOmega }(y)}{\epsilon }\right) \,\hbox {d}y. \end{aligned}$$

For any \(x\in {\mathbb {R}}^{d}\) that is sufficiently close to \(\varGamma ,\) the normal vector \(n_{x}=\nabla d_{\varOmega }(x)\) is the same as that of its closest point on \(\varGamma \); i.e., \(n_{x}=n_{x^{*}}\) with \(x^{*}=P_{\varGamma }(x).\) In particular,

$$\begin{aligned} x^{*}=P_{\varGamma }(x)\quad \text {and}\quad \eta =d_{\varOmega }(y)\implies x^{*}+\eta n_{x^{*}}=P_{\varGamma }^{y}(x). \end{aligned}$$

In contrast to an IBIM, we impose a floating restriction of K that depends on the distance between y and \(\varGamma \). We denote

$$\begin{aligned} K_{\varGamma }^{\mathrm{EIBIM}}(x,y):=K\left( P_{\varGamma }^{y}(x),P_{\varGamma }(y)\right) . \end{aligned}$$
(2.17)

Therefore,

$$\begin{aligned} \left| I[\beta ](x)-\int _{{\mathbb {R}}^{d}}\beta (P_{\varGamma }(y))\, K_{\varGamma }^{\mathrm{EIBIM}}(x,y)J_{\varGamma }(y)\,\frac{1}{\epsilon } w\left( \frac{d_{\varOmega }(y)}{\epsilon }\right) \,\hbox {d}y\right| \le C_{K,\beta }\epsilon ^{p+1}. \end{aligned}$$

The error of a discretized EIBIM system consists of a sum of contributions from different sources:

$$\begin{aligned} \text {error}=\,E_{\text {quad}}(h,\epsilon ;w)+E_{\text {ker}}(\epsilon ,r_{0};w), \end{aligned}$$

where

$$\begin{aligned} E_{\text {ker}}(\epsilon ,r_{0};w)\sim \mathcal {O}(\epsilon ^{p+1})+\mathcal {O}(r_{0}^{2}), \end{aligned}$$

and \(E_{\text {quad }}\) depends on the smoothness of the integrand as discussed in the case of IBIM. Such an extrapolative approach is proposed and analyzed in [13] for numerical integration on implicit surfaces that have corners and kinks.

2.3.1 A few extrapolative weight functions

In this section, we present some examples of extrapolative averaging kernels. In the next section, we shall present numerical results computed by kernels of the form:

$$\begin{aligned} w_{\infty ,\theta }^{(p)}(r):=\exp \left( \frac{1}{2(r-\theta )(r-1)}\right) a_{p}(r) \mathbf {1}_{[\theta ,1]}(r)\in \mathbb {K}_{\infty ,\theta }^{(p)}, \end{aligned}$$

where \(a_{p}(r)\) is a polynomial of degree p. For example,

In \(w_{\infty ,\theta }^{(1)}(r)\):

  • for \(\theta =0.1\), \(a_{1}(r)\approx -759.2781934172483\,r+446.2604260472818\);

  • for \(\theta =0,\)  \(a_{1}(r)\approx -261.5195892865372\,r+145.7876577089403.\)

In \(w_{\infty ,\theta }^{(2)}(r)\):

  • for \(\theta =0.1\), \(a_{2}(r)\approx 14317.969703708994\,r^{2}-16509.044867497203\,r+4480.224717878304\);

  • for \(\theta =0,\)  \(a_{2}(r)\approx 3196.1015220946833\,r^{2}-3457.6211113812255\,r+852.9832518883903.\)

In addition, we used the following averaging kernel in \(\mathbb {K}_{1,\theta }^{(1)}\), \(0\le \theta <1\):

$$\begin{aligned} w_{sc,\theta }^{(1)}(r)&=\frac{\pi }{(1-\theta )^{2}}b_{0}(r,\theta )b_{1}(r,\theta )\mathbf {1}_{[\theta ,1]}(r),\\ b_{0}(r,\theta )&=\sin \left( \frac{\pi (r-1)}{2(\theta -1)}\right) \cos ^{3}\left( \frac{\pi (r-1)}{2(\theta -1)}\right) ,\\ b_{1}(r,\theta )&=7+\theta +(15+9\theta )\cos \left( \frac{\pi (r-\theta )}{1-\theta }\right) . \end{aligned}$$

We further point out that \(w_{sc,\theta }^{(1)}\) is actually twice continuously differentiable across \(r=\theta \), allowing for potentially more control of the targeted integrands’ singularity at \(r=0.\) See Fig. 2 for the graphs of two of the kernels discussed.

Fig. 2

The graphs of two kernels

2.4 Extrapolative boundary integral methods (EIBIMs)

By applying the approach proposed in Sect. 2.3 to (1.7), we formulate an EIBIM for the Helmholtz–Neumann problem (1.1):

[Algorithm box: Eqs. (2.18)–(2.19), the EIBIM system obtained by applying the extrapolative averaging of Sect. 2.3 to (1.7), together with the corresponding representation of u via (1.6).]

Equations (2.18) and (2.19) can be discretized easily on uniform Cartesian grids as in (2.12) and in (2.13).

2.4.1 The choice of \(\xi \)

The choice of \(\xi \) affects the condition number of the resulting matrix. It is shown in [10] that the optimal parameter for two-dimensional problems is

$$\begin{aligned} \xi \approx {\left\{ \begin{array}{ll} \Big (\pi ^{2}+4(\ln \frac{k}{2}+\gamma )^{2}\Big )^{-\frac{1}{2}}, &{} \;k\le 8,\\ 0.5k, &{} \;k>8, \end{array}\right. } \end{aligned}$$
(2.20)

where \(\gamma \approx 0.5772\dots \) is Euler's constant. For the Helmholtz equation in \({\mathbb {R}}^{3}\), the optimal choice is

$$\begin{aligned} \xi \approx {\left\{ \begin{array}{ll} \xi _{0}\in [\frac{1}{2},1], &{} \;k\le 8,\\ 0.5k, &{} \;k>8. \end{array}\right. } \end{aligned}$$
(2.21)

We use the above choices of \(\xi \) in the numerical experiments. However, the optimal choice of \(\xi \) for EIBIMs may not be the same as the parametrized cases analyzed in [10]. This is a topic that we leave for a future paper.

3 Numerical examples

In this section, we present numerical simulations that reveal the properties of the proposed methods. We present results in two and three dimensions, using simple shapes on which we can compare the numerical solutions with the analytical solutions, and using two standard non-convex shapes. In the examples presented in Sects. 3.3 and 3.4, the results are obtained using the distance functions computed on the grids via the standard second-order level set reinitialization algorithm [16].

3.1 Tests with a unit circle

We first test an implementation of the EIBIM for \(\varOmega \) being a unit disk in two dimensions. In polar coordinates, we set

$$\begin{aligned} u^{\mathrm{tot}}(r,\theta ):=u^{\mathrm{inc}}(r,\theta )+u^{\mathrm{sc}}(r,\theta ),\quad u^{\mathrm{inc}}=aJ_{\nu }(kr) e^{i\nu \theta },\quad u^{\mathrm{sc}}=bH_{\nu }^{(1)}(kr)e^{i\nu \theta }, \end{aligned}$$

where \(H_{\nu }^{(1)}\) denotes the Hankel function of the first kind of order \(\nu .\) Then, the sound hard condition at \(r=1\) corresponds to

$$\begin{aligned} \frac{\partial u^{\mathrm{tot}}}{\partial n}(r=1,\theta )=0\implies b=-\frac{aJ_{\nu }^{\prime }(k)}{H_{\nu }^{(1)\prime }(k)}. \end{aligned}$$

In our simulations, we set \(\nu =1\), \(a=10\) and \(k=2.4048255577\), and \(u^{\mathrm{sc}}\) is the solution of (1.1) with

$$\begin{aligned} g(x)=-akJ_{\nu }^{\prime }(k)e^{i\nu \theta },\quad x=(\cos \theta ,\sin \theta )\in \varGamma . \end{aligned}$$

The chosen value of k is very close to a resonant wave number [19]. We present the numerical errors at \(x_{1}=(20,20).\) In the simulation, the regularization parameter \(r_{0}\) in (2.9) is set uniformly to \(10^{-10}\), and the kernels used are \(w_{\infty ,0.1}^{(1)}\) and \(w_{\infty ,0.1}^{(2)}\). If the regularization and the quadrature errors are negligible, then we expect to see errors that scale as \(\epsilon ^{2}\) and \(\epsilon ^{3}\), respectively.
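The reference solution used here is straightforward to evaluate. The sketch below (ours) computes the coefficient b and checks the sound hard condition at \(r=1\), then evaluates the scattered field at \(x_{1}=(20,20)\):

```python
import numpy as np
from scipy.special import jv, hankel1, jvp, h1vp

nu, a, k = 1, 10.0, 2.4048255577
b = -a * jvp(nu, k) / h1vp(nu, k)          # from the sound hard condition at r = 1

def u_tot(r, th):
    # u_tot = u_inc + u_sc = (a J_nu(kr) + b H_nu^(1)(kr)) e^{i nu th}
    return (a * jv(nu, k * r) + b * hankel1(nu, k * r)) * np.exp(1j * nu * th)

# the radial derivative of u_tot on r = 1 vanishes for every angle
du_dr = (a * k * jvp(nu, k) + b * k * h1vp(nu, k)) * np.exp(1j * nu * 0.3)

# reference value of u_sc at x_1 = (20, 20): r = sqrt(800), theta = pi/4
u_ref = b * hankel1(nu, k * np.hypot(20.0, 20.0)) * np.exp(1j * nu * np.pi / 4)
```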

Fig. 3 Unit circle. (Top) \(\epsilon =C_{0}(\Delta x/\Delta x_{0})^{0.95}\), \(C_{0}=15\). When the averaging kernels are well resolved, numerical errors are dominated by the moment errors \(E_{\text {ker}}\), which are \(\mathcal {O}(\Delta x^{1.9})\) and \(\mathcal {O}(\Delta x^{1.5})\), respectively, for \(\epsilon =\tilde{C}_{0}\Delta x^{0.95}\) and \(\epsilon =\tilde{C}_{1}\Delta x^{0.75}\) with \(w_{\infty ,0.1}^{(1)}\), and \(\mathcal {O}(\Delta x^{2.85})\) and \(\mathcal {O}(\Delta x^{2.25})\) with \(w_{\infty ,0.1}^{(2)}\). (Bottom) \(\epsilon =C_{0}(\Delta x/\Delta x_{0})^{0.75}\), \(C_{0}=4\). Averaging kernels with more vanishing moments are not necessarily more effective: in this case, the errors computed with \(w_{\infty ,0.1}^{(2)}\) are dominated by \(E_{\text {quad}}\), while with the same number of points discretizing \(\epsilon \), the errors computed with \(w_{\infty ,0.1}^{(1)}\) are dominated by \(E_{\text {ker}}\)

In Fig. 3, we show numerical convergence studies of two different scalings: \(\epsilon =\tilde{C}_{0}\Delta x^{0.95}\) and \(\epsilon =\tilde{C}_{1}\Delta x^{0.75}.\) The purpose is to show that

  1. When the system is well resolved by the grid, the errors are dominated by \(E_{\text {ker}}\). With \(w_{\infty ,0.1}^{(1)}\), \(E_{\text {ker}}\sim \mathcal {O}(\Delta x^{1.9})\) and \(\mathcal {O}(\Delta x^{1.5})\), respectively, for \(\epsilon =\tilde{C}_{0}\Delta x^{0.95}\) and \(\epsilon =\tilde{C}_{1}\Delta x^{0.75}\); with \(w_{\infty ,0.1}^{(2)}\), \(E_{\text {ker}}\sim \mathcal {O}(\Delta x^{2.85})\) and \(\mathcal {O}(\Delta x^{2.25})\), respectively. This regime is verified by the upper subfigure of Fig. 3.

  2. Higher-order kernels, i.e., kernels with more vanishing moments, do require more grid points, since the constants in the quadrature error terms are larger. Thus, in practice, lower-order kernels may outperform higher-order ones. This case is shown in the lower subfigure of Fig. 3, where \(w_{\infty ,0.1}^{(2)}\) yields worse errors, with a rate lower than the theoretical rate of \(E_{\text {ker}}\), while the lower-order kernel \(w_{\infty ,0.1}^{(1)}\) yields comparatively smaller errors that decay at the theoretical rate of \(E_{\text {ker}}.\)

Table 1 Condition numbers of the IBIM linear systems (2.12) and of the EIBIM (2.18)

In Table 1, we compare the condition numbers of the linear systems obtained from discretizing IBIM and EIBIM. We first remark that the condition number of the IBIM system appears to grow very quickly when \(k\) is very close to a resonant wave number, whereas the condition numbers of the EIBIM linear systems at this wave number decay slightly as the grid resolution increases. When the wave number is not a resonant mode, the EIBIM matrices tend to have larger but stable condition numbers. This is probably due to the extrapolative nature of the formulation, as we have observed that averaging kernels with more vanishing moments tend to result in matrices with larger condition numbers; see Table 2.

Table 2 Condition numbers of matrices formed by using different kernels

3.2 Tests with a unit sphere

We present the errors in the numerical computation for \(\varOmega \) being a unit sphere centered at the origin. With the radially symmetric solution

$$\begin{aligned} u(r)=\frac{\exp (ikr)}{kr},\quad r=|x|, \end{aligned}$$

we set the Neumann boundary condition

$$\begin{aligned} g(x)\equiv -\frac{\exp (ikr)}{kr^{2}}+i\frac{\exp (ikr)}{r},\quad r=1. \end{aligned}$$
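As an illustrative, stdlib-only sanity check (not part of the authors' solver), one can verify numerically that \(u(r)=e^{ikr}/(kr)\) satisfies the radial Helmholtz equation \(u''+\tfrac{2}{r}u'+k^{2}u=0\) and that the stated \(g\) equals \(\partial u/\partial r\) at \(r=1\):

```python
import cmath

def u(r: float, k: float) -> complex:
    """Radially symmetric outgoing solution u(r) = exp(ikr)/(kr)."""
    return cmath.exp(1j * k * r) / (k * r)

def helmholtz_residual(r: float, k: float, h: float = 1e-4) -> complex:
    """Central-difference residual of u'' + (2/r) u' + k^2 u (3D radial Laplacian)."""
    d1 = (u(r + h, k) - u(r - h, k)) / (2 * h)
    d2 = (u(r + h, k) - 2 * u(r, k) + u(r - h, k)) / h**2
    return d2 + (2 / r) * d1 + k**2 * u(r, k)

k = 1.0
# PDE residual away from the origin: O(h^2) from the finite differences
print(abs(helmholtz_residual(2.0, k)))
# Neumann data of Sect. 3.2 vs. a numerical derivative of u at r = 1
g = -cmath.exp(1j * k) / k + 1j * cmath.exp(1j * k)
du = (u(1 + 1e-6, k) - u(1 - 1e-6, k)) / 2e-6
print(abs(g - du))
```

Both residuals are at the level of the finite-difference truncation error, confirming the consistency of the boundary data with the exact solution.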

We compare solutions of the exterior Neumann–Helmholtz problem computed by IBIM based on the single-layer potential and by EIBIM based on the combined single- and double-layer potentials, using a non-eigenvalue wave number \((k=1)\) and an eigenvalue wave number \((k=\pi )\). For the EIBIM simulations, we use the kernel \(w_{sc,\theta }^{(1)}\) with \(\theta =\sqrt{\Delta x}/10\); for IBIM, we use the cosine kernel defined in (2.14). In the simulations, the regularization parameter \(r_{0}\) is set to \(2\Delta x\). The widths of the tubular neighborhood, \(\epsilon \), are chosen proportional to \(\sqrt{\Delta x}\); furthermore, they are adjusted so that, for each grid, the total number of unknowns for IBIM is roughly the same as that for EIBIM. We illustrate the errors in Fig. 4. While both kernels are continuously differentiable and have one vanishing moment, we observe from the data that for \(k=1\), the EIBIM errors seem to be dominated by the quadrature errors; more specifically, \(E_{\text {quad}}\) seems to be dominated by the errors near \(d_{\varOmega }(x)\approx 0\), where \(w_{sc,\theta }^{(1)}\) is actually twice continuously differentiable. The IBIM errors show a rate between that of \(E_{\text {ker}}\sim \mathcal {O}(\Delta x)\) and that of \(E_{\text {quad}}\sim \mathcal {O}(\Delta x^{2}).\) For \(k=\pi \), the IBIM simulations do not converge, while the EIBIM errors seem closer to the theoretical rate of its kernel error \(E_{\text {ker}}.\)

Fig. 4 Comparison of the relative errors computed by EIBIM and IBIM for solving the Helmholtz–Neumann problem in three dimensions. (Left) \(k=1\). (Right) \(k=\pi \)

Fig. 5 (Left) The “kite” shape. (Right) The bean shape. The interface is the zero set of \(\phi (x,y,z)=9\left( 1.6x+\left( \frac{y}{1.6}\right) ^{2}\right) ^{2}+\left( \frac{y}{1.5}\right) ^{2}+\left( \frac{z}{1.5}\right) ^{2}-10\)

3.3 Tests using a non-convex “kite” shape

We test our solution for the domain containing the kite shape described by

$$\begin{aligned} \mathbf {y}(s)=(\cos (s)+0.65\cos (2s)-0.65,\,1.5\sin (s)),\;\;0\le s\le 2\pi , \end{aligned}$$
(3.1)

and compare our solution with the solutions given in [9] using accurate regularization with explicit parametrization. The shape is illustrated in Fig. 5. The incident wave is \(u^{\mathrm{inc}}=e^{ik\nu \cdot x}\), where \(\nu =(1,0)\). In [9], it was shown that the scattered wave in the far field satisfies

$$\begin{aligned} u^{\mathrm{sc}}(x)=\frac{e^{ik|x|}}{\sqrt{|x|}}\left( u_{\infty }(\hat{x})+\mathcal {O} \left( \frac{1}{|x|}\right) \right) ,\quad |x|\rightarrow \infty , \end{aligned}$$

where \({\hat{x}=x/|x|}\),

$$\begin{aligned} u_{\infty }(\hat{x})=\frac{e^{-\frac{i\pi }{4}}}{\sqrt{8\pi k}}\int _{\varGamma } \big (k\hat{x}\cdot n_{y}+\xi \big )e^{-ik\hat{x}\cdot y}\beta (y)\,\hbox {d}s(y), \end{aligned}$$

and \(\beta \) is the density obtained from the combined double- and single-layer potential formulation with combination parameter \(\xi \). We report the errors in the computed solutions for \(k=5\) in Table 3. The computation was performed with the kernel \(w_{sc,\theta }^{(1)}\) defined in Sect. 2.3.1, with \(\epsilon =\sqrt{\Delta x}\) and \(\theta =\sqrt{\Delta x}/2.\)
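For illustration, the far-field integral for \(u_{\infty }\) over the parametrized kite (3.1) can be approximated with the periodic trapezoidal rule, which is spectrally accurate for smooth periodic integrands. The sketch below is not the authors' code: the density \(\beta \equiv 1\) is a hypothetical placeholder (in the method \(\beta \) comes from solving the boundary integral equation), and the choices \(\xi =k\) and the normal orientation are assumptions:

```python
import cmath, math

def y(s):
    """Kite parametrization (3.1)."""
    return (math.cos(s) + 0.65 * math.cos(2 * s) - 0.65, 1.5 * math.sin(s))

def dy(s):
    """Tangent y'(s) of the kite parametrization."""
    return (-math.sin(s) - 1.3 * math.sin(2 * s), 1.5 * math.cos(s))

def far_field(xhat, k, xi, beta, n=400):
    """Periodic trapezoidal rule for
    u_inf(xhat) = e^{-i pi/4}/sqrt(8 pi k) * int (k xhat.n_y + xi) e^{-ik xhat.y} beta ds."""
    total = 0.0 + 0.0j
    h = 2 * math.pi / n
    for j in range(n):
        s = j * h
        (y1, y2), (t1, t2) = y(s), dy(s)
        speed = math.hypot(t1, t2)
        n1, n2 = t2 / speed, -t1 / speed  # unit normal (assumed orientation)
        phase = cmath.exp(-1j * k * (xhat[0] * y1 + xhat[1] * y2))
        total += (k * (xhat[0] * n1 + xhat[1] * n2) + xi) * phase * beta(s) * speed * h
    return cmath.exp(-1j * math.pi / 4) / math.sqrt(8 * math.pi * k) * total

# placeholder density beta = 1; xi = k is an assumed combination parameter
u_inf = far_field((1.0, 0.0), k=5.0, xi=5.0, beta=lambda s: 1.0)
```

Doubling the number of quadrature nodes changes the result only at the level of rounding error, reflecting the spectral accuracy of the trapezoidal rule on closed smooth curves.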

Table 3 Far-field solution errors on 2D kite shape described in (3.1)

3.4 Scattering in three dimensions by a “Bean” shape

We test the proposed algorithm on a non-convex “bean”-shaped scattering surface in three dimensions, as shown in Fig. 5. The incident wave is taken to be \(u^{\mathrm{inc}}=e^{ik\nu \cdot x}\), where \(\nu =(-1,0,0)\). In Table 4, we compare the solutions computed by IBIM, using the kernel \(w^{cos}_\epsilon \), and by EIBIM, using the averaging kernel \(w_{sc,\theta }^{(1)}\) with \(\theta =\sqrt{\Delta x}/10\) and \(\epsilon =\sqrt{\Delta x}\).
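To make the implicit setup concrete, the Neumann data \(g=-\partial u^{\mathrm{inc}}/\partial n=-ik(\nu \cdot n)e^{ik\nu \cdot x}\) can be evaluated on the zero set of the level set function \(\phi \) from Fig. 5, with the unit normal \(n=\nabla \phi /|\nabla \phi |\) approximated by central differences. This is an illustrative stdlib-only sketch, not the authors' discretization; the wave number \(k=2\) and the bisection search for a surface point are choices made purely for the example:

```python
import cmath, math

def phi(x, y, z):
    """Level set function of the bean shape (caption of Fig. 5)."""
    return 9 * (1.6 * x + (y / 1.6) ** 2) ** 2 + (y / 1.5) ** 2 + (z / 1.5) ** 2 - 10

def normal(x, y, z, h=1e-6):
    """Unit normal n = grad(phi)/|grad(phi)| via central differences."""
    g = ((phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
         (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
         (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))
    m = math.sqrt(sum(c * c for c in g))
    return tuple(c / m for c in g)

def neumann_data(p, k, nu):
    """g = -d(u_inc)/dn = -ik (nu . n) exp(ik nu . p) for u_inc = exp(ik nu . x)."""
    n = normal(*p)
    return (-1j * k * sum(a * b for a, b in zip(nu, n))
            * cmath.exp(1j * k * sum(a * b for a, b in zip(nu, p))))

# locate a point of the zero level set on the positive x-axis by bisection
lo, hi = 0.0, 1.0  # phi(0,0,0) < 0 < phi(1,0,0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid, 0, 0) < 0 else (lo, mid)
p = (0.5 * (lo + hi), 0.0, 0.0)

g = neumann_data(p, k=2.0, nu=(-1.0, 0.0, 0.0))
```

On the positive \(x\)-axis the normal reduces to \((1,0,0)\), so the computed \(g\) can be checked against the closed form \(ike^{-ikx}\) there.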

Table 4 Differences in the solutions computed by EIBIM and IBIM using different mesh sizes

4 Summary

We proposed two new types of numerical methods (abbreviated above as IBIMs and EIBIMs) for solving the Helmholtz equation in unbounded domains \({\mathbb {R}}^{m}{\setminus }\bar{\varOmega }\) with Neumann boundary conditions on \(\partial \varOmega .\) What distinguishes the proposed algorithms from other related ones is: (1) the use of implicit representations of \(\varOmega \) and \(\partial \varOmega \) without the need for explicit parametrization; (2) in the case in which hypersingular kernels are involved, EIBIMs rely on built-in extrapolation instead of solving the equivalent equations that involve partial derivatives of the unknown densities. The proposed algorithms can be easily implemented on Cartesian grids; however, a drawback is the larger linear systems resulting from the need to resolve the averaging kernel. In this regard, the algorithms are not intended to compete with conventional high-order accurate methods for fixed and well-parameterized geometries. Our algorithms are more suitable for applications in which one needs to solve the Helmholtz equation as \(\partial \varOmega \) goes through significant changes in its shape and topology, applications for which implicit representation of the geometries is a natural choice.

Acknowledgements

The authors are partially supported by Simons Foundation, NSF Grants DMS-1318975, DMS-1217203, and ARO Grant No. W911NF-12-1-0519. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing the computing resources that have contributed to the research results reported within this paper.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.