Spectral Galerkin Method for Solving Helmholtz and Laplace Dirichlet Problems on Multiple Open Arcs

We present a spectral numerical scheme for solving Helmholtz and Laplace problems with Dirichlet boundary conditions on an unbounded non-Lipschitz domain $\mathbb{R}^2 \setminus \overline{\Gamma}$, where $\Gamma$ is a finite collection of open arcs. Through an indirect method, a first-kind formulation is derived whose variational form is discretized using weighted Chebyshev polynomials. This choice of basis allows for exponential convergence rates under smoothness assumptions. Moreover, by implementing a simple compression algorithm, we are able to efficiently account for large numbers of arcs as well as a wide wavenumber range.


Introduction
We seek solutions of the Helmholtz and Laplace equations in the two-dimensional plane after removing a finite collection of open finite curves, also called arcs. This setting arises in areas such as structural and mechanical engineering [2] or biomedical imaging [11], to name a few. Such problems pose the following challenges: (1) unbounded domains, which call for boundary integral methods with carefully chosen radiation conditions; (2) singular behavior of solutions near arc endpoints; and (3) large numbers of degrees of freedom as the wavenumber or the number of arcs increases.
Our approach is to recast the problem as a system of boundary integral equations defined on the arcs, so as to obtain an integral representation of the volume solution. Well-posedness for a single arc was proven in [9], with an extension to the multiple arcs case given in [5]. We will consider numerical approximations of the resulting surface densities based on Galerkin-Bubnov discretizations of the corresponding system of boundary integral equations.
In the present note, we start by briefly introducing a spectral scheme that accounts for general arcs as well as for a wide wavenumber range. We show that a significant reduction in both memory consumption and computational work can be achieved by an ad hoc matrix compression algorithm. Moreover, we establish detailed interdependencies between compression parameters and accuracy. Numerical experiments validate our claims and point out further improvements.

Continuous Model Problem
Let the canonical domain $(-1,1)\times\{0\}$ be denoted by $\widehat{\Gamma}$. We say that $g:\widehat{\Gamma}\to\mathbb{C}$ is $\rho$-analytic if the function $t\mapsto g(t,0)$ can be extended to an analytic function on the Bernstein ellipse of parameter $\rho>1$ (cf. [10, Chapter 8]). We say that $\Gamma\subset\mathbb{R}^2$ is a regular Jordan arc of class $C^m$, for $m\in\mathbb{N}$, if it is the image of a bijective parametrization, denoted by $\mathbf{r}=(r_1,r_2)$, such that its components are $C^m(\widehat{\Gamma})$-functions, $\mathbf{r}:\widehat{\Gamma}\to\Gamma$ and $\|\mathbf{r}'(t)\|_2>0$ for all $t\in\widehat{\Gamma}$, where $\|\cdot\|_2$ is the Euclidean norm. Similarly, we define $\rho$-analytic arcs as those whose parametrization components are $\rho$-analytic. Throughout, we will assume that any regular Jordan arc $\Gamma$ admits an extension $\widetilde{\Gamma}$ that is a closed curve of the same regularity.
Consider a finite number $M\in\mathbb{N}$ of at least $C^1$-arcs, written $\{\Gamma_i\}_{i=1}^M$, such that their closures are mutually disjoint. Moreover, we assume that there are disjoint domains $\Omega_i$ whose boundaries are given by the extensions, $\partial\Omega_i=\widetilde{\Gamma}_i$, for $i=1,\dots,M$. Let us define $\Gamma:=\bigcup_{i=1}^M\Gamma_i$. We say that $\Gamma$ is of class $C^m$, $m\in\mathbb{N}$, if each arc $\Gamma_i$ is of class $C^m$, and analogously for the $\rho$-analytic case. For $i\in\{1,\dots,M\}$, let $\mathbf{r}_i:\widehat{\Gamma}\to\Gamma_i$ and $g_i:\Gamma_i\to\mathbb{C}$. Sobolev spaces on $\Gamma$ are defined arc-wise, and $\widetilde{H}^s_0(\Gamma)$ refers to mean-zero spaces [5, Section 2.3]. We will also make use of a suitable Hilbert space in $\mathbb{R}^2$ and of traces along each arc: $\mathbf{n}_i(\mathbf{y})$ denotes the unit vector with direction $(r_{i,2}'(t),-r_{i,1}'(t))$, with $t$ such that $\mathbf{r}_i(t)=\mathbf{y}$. For a function $u$ defined in an open neighborhood of $\Gamma_i$ such that $\gamma_i^+u=\gamma_i^-u$, we denote $\gamma_i u:=\gamma_i^\pm u$.
For $\kappa\ge 0$, we can express the solution $U$ of Problem 1 as a sum of single layer potentials $\mathrm{SL}_i[\kappa]\lambda_i$ generated at each curve $\Gamma_i$, with $G_\kappa$ the corresponding fundamental solution, defined as in [8]. This representation displays the desired behavior at infinity as long as each $\lambda_i$ lies in the right functional space [5, Section 4]. In order to find the surface densities $\lambda_i$, we take Dirichlet traces $\gamma_i^\pm$ of the $\mathrm{SL}_j$ and impose the boundary conditions (2). This naturally defines a system of weakly singular boundary integral operators and a boundary integral equation problem equivalent to Problem 1.
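For concreteness, the fundamental solutions involved can be evaluated as follows. This is a minimal sketch under the standard normalization conventions (the exact convention of [8] is assumed, not quoted): the Laplace kernel $-\log\|x-y\|/(2\pi)$ for $\kappa=0$ and the outgoing Helmholtz kernel $(i/4)H_0^{(1)}(\kappa\|x-y\|)$ for $\kappa>0$.

```python
import numpy as np
from scipy.special import hankel1


def fundamental_solution(kappa, x, y):
    """Standard 2D fundamental solution G_kappa evaluated at points x, y.

    kappa == 0 gives the Laplace kernel -log|x - y| / (2 pi); kappa > 0
    gives the outgoing Helmholtz kernel (i/4) * H_0^{(1)}(kappa |x - y|).
    """
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    if kappa == 0.0:
        return -np.log(r) / (2.0 * np.pi)
    return 0.25j * hankel1(0, kappa * r)
```

Both kernels share the same logarithmic singularity as $\|x-y\|\to 0$, which is what the self-interaction quadratures below must resolve.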

Spectral Discretization
We present a family of finite-dimensional subspaces of $H^{-\frac12}(\Gamma)$ that can be used to approximate the solution of Problem 2 (cf. [4, 6]). Let $\mathbb{T}_N(\widehat{\Gamma})$ denote the space spanned by first-kind Chebyshev polynomials $\{T_n\}_{n=0}^N$ of degree lower than or equal to $N$ on $\widehat{\Gamma}$, orthogonal with respect to the $L^2(-1,1)$ inner product under the weight $w^{-1}$, with $w(t):=\sqrt{1-t^2}$. Now, we construct elements $p_n^i=T_n\circ\mathbf{r}_i^{-1}$ over each arc $\Gamma_i$, spanning the space $\mathbb{T}_N(\Gamma_i)$; for practical reasons, we work with a normalized version of this space. We account for edge singularities by multiplying the basis $\{p_n^i\}_{n=0}^N$ by a suitable weight. By Chebyshev orthogonality, we can easily define the mean-zero subspace. With these definitions, we set the discretization space for a Galerkin-Bubnov solution of Problem 2.
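The weighted orthogonality underlying this basis can be checked numerically with the classical Gauss-Chebyshev rule, which integrates exactly against the weight $w^{-1}$. A minimal sketch (the node and weight formulas are the textbook rule, not code from the paper):

```python
import numpy as np


def gauss_chebyshev(N):
    # Nodes and weights for the integral of f(t) / sqrt(1 - t^2) over
    # (-1, 1); the rule is exact for polynomials f of degree <= 2N - 1.
    k = np.arange(1, N + 1)
    nodes = np.cos((2 * k - 1) * np.pi / (2 * N))
    weights = np.full(N, np.pi / N)
    return nodes, weights


def weighted_inner(n, m, N=64):
    # <T_n, T_m> under the weight w^{-1}(t) = 1 / sqrt(1 - t^2),
    # using T_n(cos(theta)) = cos(n * theta).
    t, w = gauss_chebyshev(N)
    return np.sum(w * np.cos(n * np.arccos(t)) * np.cos(m * np.arccos(t)))
```

One recovers the expected values: $\pi$ for $n=m=0$, $\pi/2$ for $n=m>0$, and $0$ otherwise.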

Problem 3 (Linear System)
For $\kappa>0$, let $N\in\mathbb{N}$ and $g\in H^{\frac12}(\Gamma)$ be the same as in Problem 2. Then, we seek the coefficient vector $\mathbf{u}\in\mathbb{C}^{M(N+1)}$ solving the Galerkin linear system $\mathrm{L}[\kappa]\mathbf{u}=\mathbf{g}$. There, $\mathrm{L}_{ij}[\kappa]$ is the block of the weakly singular operator whose kernel is parametrized by $\mathbf{r}_i$, $\mathbf{r}_j$, and the right-hand side $\mathbf{g}=(\mathbf{g}_1,\dots,\mathbf{g}_M)\in\mathbb{C}^{M(N+1)}$ has components obtained by testing $g$ against the basis. Moreover, if $\Gamma$ and $g$ are $\rho$-analytic with $\rho>1$, we have super-algebraic convergence rates for all $N\ge N_0$, where $C(\Gamma,\kappa)$ is a positive constant that does not depend on $N$.
Remark 1 Observe that the constants $C(\Gamma,\kappa)$ and $N_0$ depend on the geometry and frequency. To the best of our knowledge, previous convergence results for 2D arcs are somewhat limited. For intervals, the result was established in [6], whereas for more general arcs, results are only available for the Laplace case [1]. Super-algebraic convergence rates can be achieved by the method detailed in [3], though that scheme is limited to intervals and to the case of elliptic problems ($N_0=0$). More complex cases remain an open problem.

Numerical Implementation and Compression Algorithm
Before fleshing out our proposed compression technique, we explain how $\mathrm{L}[\kappa]$ and $\mathbf{g}$ of Problem 3 are computed. For the right-hand side, one must compute integrals that correspond to Fourier-Chebyshev coefficients of $g(t)$ and can be approximated using the Fast Fourier Transform [10]. Computations for matrix terms $\mathrm{L}_{ij}[\kappa]$ are split into two groups: (a) cross-interactions, where the supports of test and trial functions lie along curves $\Gamma_i$, $\Gamma_j$ with $i\neq j$; and (b) self-interactions, where both trial and test functions are defined on the same curve. As the integral kernel is smooth for cross-interactions, we use the same computational procedure as for the right-hand side. For self-interactions, the kernel function has a singularity that can be characterized as
$$G_\kappa(\mathbf{r}(t),\mathbf{r}(s)) = (2\pi)^{-1}\log|t-s|\,J_0(\kappa\,\|\mathbf{r}(t)-\mathbf{r}(s)\|_2) + G_r(t,s), \quad t\neq s,$$
for $t,s\in\widehat{\Gamma}$, where $J_0$ is the zeroth-order first-kind Bessel function and $G_r$ is a regular function. Thus, integration of the regular part is done as in the cross-interaction case, while integrals with the first term as kernel are obtained by convolution, as the integrals for $\log|t-s|$ are known (see [6, Remark 4.2]).
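The FFT-based step for the right-hand side can be sketched as follows: Chebyshev coefficients of a function sampled at Chebyshev-Lobatto points are, up to scaling, a type-I discrete cosine transform, itself an FFT-type operation. This sketch uses SciPy's DCT rather than any routine from the paper:

```python
import numpy as np
from scipy.fft import dct


def chebyshev_coefficients(f, N):
    """Coefficients c_n such that f(t) ~ sum_{n=0}^{N} c_n T_n(t),
    computed from samples at the Chebyshev-Lobatto points cos(pi k / N)
    via a type-I DCT (an FFT-based transform)."""
    k = np.arange(N + 1)
    samples = f(np.cos(np.pi * k / N))
    c = dct(samples, type=1) / N
    c[0] *= 0.5   # endpoint coefficients carry a factor 1/2
    c[-1] *= 0.5
    return c
```

For instance, $t^2=\tfrac12 T_0(t)+\tfrac12 T_2(t)$, and the routine recovers exactly these coefficients since the interpolant of a degree-2 polynomial is the polynomial itself.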
Yet, as $\kappa$ increases, larger values of $N$ are required, and hence the need to compress the resulting matrix terms. As stated in [10, Chapters 7 and 8], the regularity of a function controls the decay of its Fourier-Chebyshev coefficients. Hence, as the entries of the matrix $\mathrm{L}[\kappa]$ are precisely such coefficients, for a smooth kernel one observes fast-decaying terms. This implies that we can select small blocks to approximate the matrix and obtain a sparse approximation by discarding the remaining entries, based on a predetermined tolerance $\epsilon>0$. Specifically, the kernel function is smooth when we compute cross-interactions. Let the routine Quadrature(l,m) compute the term $(l,m)$ of this interaction matrix using a 2D Gauss-Chebyshev quadrature. Given a tolerance $\epsilon>0$, we minimize the number of computations needed by performing a binary search: the algorithm returns the minimum number of columns required, $N_{\mathrm{cols}}$, by searching in the first row for the minimum index such that the absolute value of the matrix entry is lower than $\epsilon$. The binary search is restricted to a depth $L_{\max}\in\mathbb{N}$. The same procedure is used to estimate the number of rows, $N_{\mathrm{rows}}$, by executing a binary search in the first column. Once $N_{\mathrm{cols}}$ and $N_{\mathrm{rows}}$ are selected, we define $N:=\max\{N_{\mathrm{rows}},N_{\mathrm{cols}}\}$ and compute the block of size $N\times N$ as in the full matrix implementation.
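The depth-limited binary search described above can be sketched as follows. Here Quadrature is abstracted as a callable returning a single matrix entry, and the monotone decay of the first row is assumed, as in the smooth cross-interaction case:

```python
def cutoff_index(quadrature, N, eps, L_max):
    """Smallest index m in the first row with |quadrature(0, m)| < eps,
    found by a binary search restricted to depth L_max; if the depth
    budget runs out, the current (conservative) upper end is returned."""
    lo, hi = 0, N
    for _ in range(L_max):
        if lo >= hi:
            break
        mid = (lo + hi) // 2
        if abs(quadrature(0, mid)) < eps:
            hi = mid          # entry already below tolerance: search left
        else:
            lo = mid + 1      # entry still above tolerance: search right
    return hi
```

Running the same routine on the first column (swapping the index roles in the callable) yields $N_{\mathrm{rows}}$, and the block size is the maximum of the two results.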

Matrix Compression Algorithm
The matrix compression percentage strongly depends on the regularity of the arcs involved. For $\rho$-analytic arcs, using [10, Theorem 8.1] we can prove a lower bound on the compression rate, where $\Upsilon$ is an upper bound for the absolute value of the kernel in the corresponding Bernstein ellipse. However, since compression is done by a binary search restricted to depth $L_{\max}$, the attainable bound for the compression rate depends on $L_{\max}$ as well. Compression of self-interaction blocks does not follow the same ideas. In fact, these blocks can be characterized as two perturbations of the canonical case, $\Gamma_i=\widehat{\Gamma}$ with $\kappa=0$, which leads to a diagonal matrix. Namely, these are:
1. A low-frequency perturbation caused by the mapping $\mathbf{r}_i:\widehat{\Gamma}\to\Gamma_i$, similar to the cross-interaction case.
2. A frequency perturbation that creates banded matrices.
In order to reduce memory consumption (though not computational time), we discard the entries of the self-interaction matrices whose absolute value is lower than the given tolerance.
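This discarding step amounts to simple entrywise thresholding. An illustrative sketch (not the authors' code) that also reports the percentage of non-zero entries kept, as tabulated later as %NNZ:

```python
import numpy as np


def threshold_block(A, eps):
    """Zero out entries of A with magnitude below eps and report the
    percentage of non-zero entries kept (%NNZ)."""
    B = np.where(np.abs(A) >= eps, A, 0.0)
    nnz_pct = 100.0 * np.count_nonzero(B) / B.size
    return B, nnz_pct
```

Since every entry is still computed before being discarded, this saves storage but not assembly time, exactly as remarked above.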
As expected, matrix compression induces an extra error, as it perturbs the original linear system solved by $\lambda_N$ in Problem 3. We denote by $\mathrm{L}_\epsilon[\kappa]$ the matrix generated by the compression algorithm with tolerance $\epsilon$, and define the matrix difference $\delta\mathrm{L}:=\mathrm{L}[\kappa]-\mathrm{L}_\epsilon[\kappa]$. We seek to control the perturbed solution $\mathbf{u}_\epsilon=\mathbf{u}+\delta\mathbf{u}$ of the compressed system, where $\mathbf{u}$ and $\mathbf{g}$ are the same as in Problem 3. In order to bound this error, we will assume that, for every pair of indices $(i,j)$ in the matrix $\mathrm{L}[\kappa]$, the discarded entries are bounded by the tolerance.

Theorem 3 Let $N\in\mathbb{N}$ be such that there is a unique solution $\lambda_N$ of Problem 3. Then, there is a constant $C(\Gamma,\kappa)>0$, not depending on $N$, bounding the relative error of the compressed solution.

Proof By [7, Section 1.13.2], the relative error is controlled by the conditioning of $\mathrm{L}[\kappa]$ and the size of $\delta\mathrm{L}$, so we need to estimate both factors. The bound for the first term is direct from (5) and matrix norm definitions. By the classical bound of a matrix inverse and the continuity of the associated boundary integral operator, $\|\mathrm{L}[\kappa]^{-1}\|$ is bounded independently of $N$, from where the result follows directly.
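The perturbation argument can be illustrated numerically on a generic well-conditioned system. This sketch checks the classical bound $\|\delta\mathbf{u}\|/\|\mathbf{u}\| \le \rho/(1-\rho)$ with $\rho = \operatorname{cond}(\mathrm{L})\,\|\delta\mathrm{L}\|/\|\mathrm{L}\|$; it is standard linear algebra applied to a random stand-in matrix, not the boundary-element matrices themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
L = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned stand-in
g = rng.standard_normal(n)
u = np.linalg.solve(L, g)                        # unperturbed solution

dL = 1e-8 * rng.standard_normal((n, n))          # compression-like perturbation
u_eps = np.linalg.solve(L + dL, g)               # solution of perturbed system

rel_err = np.linalg.norm(u_eps - u) / np.linalg.norm(u)
rho = np.linalg.cond(L) * np.linalg.norm(dL, 2) / np.linalg.norm(L, 2)
bound = rho / (1.0 - rho)                        # classical perturbation bound
```

The observed relative error always sits below the bound, which is what Theorem 3 exploits once $\|\mathrm{L}[\kappa]^{-1}\|$ is controlled uniformly in $N$.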
We can also estimate the error introduced by the compression algorithm in terms of the energy norm. In order to do so, define $(\lambda_N^\epsilon)_i := \sum_{m=0}^{N}(\mathbf{u}_i^\epsilon)_m\, q_m^i$ in $\Gamma_i$. By the same arguments as in the above proof, we obtain an energy-norm bound, where $g$ is the same as in Problem 2 and $C_1(\kappa,\Gamma)$, $C_2(\kappa,\Gamma)$ are two different constants.
Remark 2 Our compression algorithm produces a faster and less memory-demanding implementation of the spectral Galerkin method at the cost of some accuracy loss, similar to fast multipole or hierarchical matrix methods. Moreover, once the matrix has been compressed, we can implement a fast matrix-vector product.
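Once thresholded, a block can be stored in a standard sparse format so that matrix-vector products cost proportional to the number of stored entries. A sketch using SciPy's CSR format (an illustrative storage choice, not prescribed by the paper):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A small thresholded block: entries below eps have already been dropped.
dense = np.array([[2.0, 0.0, 1e-9],
                  [0.0, 3.0, 0.0],
                  [1e-9, 0.0, 4.0]])
eps = 1e-6
sparse = csr_matrix(np.where(np.abs(dense) >= eps, dense, 0.0))

x = np.ones(3)
y = sparse @ x   # fast matrix-vector product on the compressed block
```

Only the surviving entries participate in the product, which is what makes iterative solvers on the compressed system cheap.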

Numerical Results
To illustrate the above claims, Fig. 1 presents convergence results for different wavenumbers, $\kappa = 0, 25, 50, 100$, for a configuration of $M = 28$ arcs. As the chosen geometry and excitation are given by analytic functions, Theorem 2 predicts the exponential rate of convergence observed numerically. Table 1 provides matrix compression results for $\kappa = 100$ and the same geometry as in Fig. 1. It presents the percentage of non-zero entries (%NNZ) and relative errors, as bounded in Theorem 3, as functions of the maximum level of the binary search ($L_{\max}$), the tolerance ($\epsilon$), and the polynomial order per arc (Order). For low orders (Order < 60), relative errors are quite large, and therefore most of the matrix terms are kept. This is due to an insufficient number of matrix entries to solve the problem with good accuracy (see Fig. 1), rendering compression pointless. On the other hand, once convergence is achieved, the compression error drastically decreases along with the percentage of matrix terms stored.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.