Parallel Adaptive Cross Approximation for the Multi-trace Formulation of Scattering Problems

Abstract. We present a highly parallel version of the boundary element method accelerated by the adaptive cross approximation for the efficient solution of scattering problems with composite scatterers. Individual boundary integral operators are treated independently, i.e., the corresponding system matrices are distributed among computational nodes and assembled concurrently.


Introduction
Boundary integral equations provide a valuable tool for the description of natural phenomena including wave scattering problems. Their numerical counterpart, the boundary element method (BEM), has become an alternative to volume-based approaches such as the finite element or finite volume methods. In addition to naturally reducing the problem to the skeleton of the computational domain, BEM has found its place in HPC implementations of PDE solvers. One of the reasons is the high computational intensity of the system matrix assembly, which fits well with today's design of HPC clusters, where memory accesses are much more costly than arithmetic operations. To overcome the quadratic complexity of BEM, several methods have been proposed, including the fast multipole method [4,15] and the adaptive cross approximation (ACA) [2,16,17], with the latter utilized in this paper.
To describe wave scattering in a composite scatterer and its complement we utilize the local version of the multi-trace formulation (MTF) as introduced in [5,6,7] and presented here briefly in Sect. 2. The formulation leads to a block matrix with individual boundary integral operators for every homogeneous subdomain and coupling matrices on the off-diagonals. Although not described in detail in this paper, MTF allows for natural operator preconditioning. In Sect. 3 we propose a strategy to parallelize the assembly of the MTF matrix blocks and their application in an iterative solver, based on the approach presented in [11,12,13] for single-domain problems. In addition to the distributed parallelism, the method takes full advantage of the BEM4I library [14,20,21] and its assemblers parallelized in shared memory and vectorized using OpenMP. We provide the results of numerical experiments in Sect. 4.

Multi-trace Formulation
In the following we consider a partitioning of the space into n + 1 Lipschitz domains with Ω_0 denoting the unbounded background domain, ∪_{i=1}^{n} Ω_i representing a composite scatterer, and Σ denoting the skeleton of the decomposition. For a given incident field u^inc satisfying −Δu^inc − κ_0^2 u^inc = 0 in R^3 we aim to solve the equations

−Δu − κ_i^2 u = 0 in Ω_i, i = 0, ..., n, (2.1)

for u = u^sc + u^inc, with the transmission conditions (2.2) enforcing the continuity of the Dirichlet traces and the equilibrium of the Neumann traces across the interfaces, where u_i, t_i denote the Dirichlet and Neumann traces of u|_{Ω_i}. Note that the normal vector n_i is always directed outside of Ω_i, see Fig. 1. Due to (2.1) and u^inc satisfying the Helmholtz equation, the solution in every subdomain admits the representation formula [16,18]

u|_{Ω_i} = Ṽ_{κ_i} t_i − W_{κ_i} u_i in Ω_i

with the single- and double-layer potentials Ṽ_κ, W_κ, respectively, and the fundamental solution v_κ(x, y) = exp(iκ|x − y|)/(4π|x − y|).

Applying the Dirichlet and Neumann trace operators γ_i^D, γ_i^N to the representation formula results in the relation between the traces of u|_{Ω_i},

u_i = (1/2 I − K_{κ_i}) u_i + V_{κ_i} t_i,
t_i = D_{κ_i} u_i + (1/2 I + K'_{κ_i}) t_i, (2.3)

with the boundary integral operators V_{κ_i}, K_{κ_i}, K'_{κ_i}, D_{κ_i} defined as compositions of the trace operators γ_i^D, γ_i^N and the potentials.

The idea of MTF is to replace the identity part of (2.3) by the transmission conditions (2.2). To this end we define operators exchanging the traces between neighbouring subdomains and arrive at the variational formulation for the situation in Fig. 1. For a definition of the above Sobolev spaces on manifolds we refer the interested reader to [1,7,18,19] and the references therein. To discretize the system we decompose the skeleton Σ into plane triangles τ_k and define the discrete function space span{ϕ_ℓ} of globally continuous piecewise linear functions to approximate all the above spaces. The discrete system (2.4) thus has a block structure with the discretized Calderón operators on the diagonal, the coupling matrices realizing the transmission conditions on the off-diagonals, and the right-hand side given by duality pairings with the traces of the incident field. The local indices k, ℓ in (2.4) span over all globally defined basis functions supported on the respective interfaces Γ_i or Γ_{i,j}.
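As a small sanity check on the fundamental solution defined above, the following sketch (the function names are illustrative, not part of BEM4I) verifies numerically that v_κ satisfies the Helmholtz equation −Δv − κ^2 v = 0 away from the singularity, using a second-order finite-difference Laplacian:

```python
import numpy as np

def v_kappa(x, y, kappa):
    """Fundamental solution of the 3D Helmholtz equation."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)

def helmholtz_residual(x, y, kappa, h=1e-3):
    """Second-order finite-difference approximation of (-Laplace - kappa^2) v at x."""
    lap = 0.0
    for d in range(3):
        e = np.zeros(3)
        e[d] = h
        lap += (v_kappa(x + e, y, kappa) - 2.0 * v_kappa(x, y, kappa)
                + v_kappa(x - e, y, kappa)) / h**2
    return -lap - kappa**2 * v_kappa(x, y, kappa)

x = np.array([1.0, 0.5, -0.3])
y = np.zeros(3)
print(abs(helmholtz_residual(x, y, kappa=4.0)))  # close to zero away from x = y
```

With κ = 4 (the value of κ_0 used in Sect. 4) the residual is dominated by the O(h^2) truncation error of the stencil and stays several orders of magnitude below |v_κ|.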

Parallel ACA
The matrices produced by the classical BEM are dense. Although the number of degrees of freedom is smaller compared to volume-based methods (e.g., the finite element method), one of the so-called fast BEM techniques has to be applied in order to solve large-scale problems. These methods are usually based on a hierarchical clustering of the underlying mesh and approximations of matrix blocks corresponding to pairs of sufficiently separated clusters. Here we use the adaptive cross approximation (ACA) method since it is a purely algebraic approach and, once properly implemented, it can be used for various kinds of problems.
After the hierarchical clustering, ACA proceeds by approximating the sub-matrices corresponding to pairs of well-separated (admissible) clusters by products of two low-rank factors. Blocks corresponding to non-admissible pairs of clusters are assembled as dense matrices. Due to the limited scope of this paper, we refer the reader to [3,16] for more details. A parallelization of the method based on the cyclic graph decomposition was presented in [13], where only certain special numbers of processes were discussed. In [12] we further extended the approach to support a general number of processes. In the following section we recollect its basic principle and extend it to support the solution of MTF systems.
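The core of ACA with partial pivoting can be sketched in a few lines. The following is a self-contained illustration on a smooth 1/|x − y| kernel between two well-separated point clusters; it is not the implementation from BEM4I or [3,16], and the stopping heuristic is the usual Frobenius-norm criterion with the cross terms dropped:

```python
import numpy as np

def aca_partial_pivoting(get_row, get_col, m, n, eps=1e-6, max_rank=64):
    """Build a low-rank approximation A ~ U @ V of an implicitly given
    m x n admissible block, touching only O(rank) rows and columns."""
    us, vs = [], []
    used_rows = {0}
    i = 0
    approx_norm2 = 0.0
    for _ in range(min(m, n, max_rank)):
        # residual row i: A[i, :] minus the current approximation
        v = get_row(i) - sum(u[i] * w for u, w in zip(us, vs))
        j = int(np.argmax(np.abs(v)))
        if abs(v[j]) < 1e-14:
            break
        # residual column j, scaled by the pivot entry
        u = (get_col(j) - sum(w[j] * uu for uu, w in zip(us, vs))) / v[j]
        us.append(u)
        vs.append(v)
        # heuristic Frobenius norm of the approximation (cross terms dropped)
        approx_norm2 += np.dot(u, u) * np.dot(v, v)
        if np.linalg.norm(u) * np.linalg.norm(v) <= eps * np.sqrt(approx_norm2):
            break
        # next pivot row: the largest residual entry among unused rows
        cand = np.abs(u)
        cand[list(used_rows)] = -1.0
        i = int(np.argmax(cand))
        used_rows.add(i)
    return np.array(us).T, np.array(vs)

# Admissible block of the 1/|x - y| kernel between two separated clusters.
rng = np.random.default_rng(0)
X = rng.random((200, 3))                               # source cluster
Y = rng.random((200, 3)) + np.array([5.0, 0.0, 0.0])   # well-separated targets
A = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
U, V = aca_partial_pivoting(lambda i: A[i, :].copy(),
                            lambda j: A[:, j].copy(), 200, 200, eps=1e-6)
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print(U.shape[1], err)  # rank far below 200, small relative error
```

The essential property is visible in the access pattern: only individual rows and columns of the block are ever evaluated, so an admissible block is never assembled densely.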
Let us first briefly describe the parallelization of a single-domain problem. To distribute the BEM system matrices among P processes we decompose the input surface mesh Γ into P disjoint sub-meshes Γ_1, Γ_2, ..., Γ_P of approximately the same number of elements using, e.g., the METIS library [10]. In this manner we obtain a P × P block structure of the matrices such that the block in row i and column j is induced by the integration over Γ_i and Γ_j. Afterwards, the P^2 blocks are assembled in parallel by P MPI processes. The workload distribution follows the so-called cyclic graph decomposition introduced in [13] for special values of P and later generalized in [12]. This way, each MPI process requires O(n/√P) mesh data for the assembly of its blocks and actively works with O(n/√P) degrees of freedom during the matrix-vector multiplication, which has a positive effect on the efficiency of the required MPI synchronization phase. See Fig. 2 for an example of an 8 × 8 block distribution together with the respective generator graph.

The extension of the proposed parallelization scheme to local multi-trace operators is straightforward. Let Ω_0, Ω_1, ..., Ω_m denote the subdomains with their respective boundaries Γ_0, Γ_1, ..., Γ_m. We apply our parallel ACA scheme to each subdomain individually, i.e., each Γ_j is split into P sub-meshes. The BEM operators K_{κ_j,h}, V_{κ_j,h}, D_{κ_j,h} are treated as P × P block operators, each assembled in parallel according to the corresponding cyclic graph decomposition. This approach works reasonably well; however, distributing each boundary among the same number of processes may lead to poor parallel efficiency in the case of a Γ_j with a small number of elements. Thus, the goal of our future research is to design an advanced parallel scheme for the assembly of the local operators such that K_{κ_j,h}, V_{κ_j,h}, D_{κ_j,h} are assembled by various numbers of MPI processes.
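The block-to-process assignment behind the cyclic graph decomposition can be illustrated by a toy example. For P = 7, the set {0, 1, 3} is a perfect difference set in Z_7, so a generator graph on these three vertices has 7 edges whose cyclic shifts partition all 49 blocks while every process touches only 3 ≈ √P sub-meshes. This is a hand-built reconstruction of the idea from [12,13], not the BEM4I implementation:

```python
# Toy cyclic graph decomposition for P = 7 MPI processes.
# The generator edges are ordered pairs over the difference set {0, 1, 3}
# in Z_7: the differences (j - i) mod 7 cover 0, ..., 6 exactly once.
P = 7
generator = [(0, 0), (0, 1), (1, 3), (0, 3), (3, 0), (3, 1), (1, 0)]
assert sorted((j - i) % P for i, j in generator) == list(range(P))

# Process p owns the cyclic shift of the generator edges by p.
blocks_of = {p: [((i + p) % P, (j + p) % P) for i, j in generator]
             for p in range(P)}

# The P shifts partition all P * P matrix blocks ...
all_blocks = {b for blocks in blocks_of.values() for b in blocks}
assert len(all_blocks) == P * P

# ... while every process touches only the 3 = O(sqrt(P)) sub-meshes
# {0, 1, 3} + p, keeping the mesh data per process at O(n / sqrt(P)).
for p, blocks in blocks_of.items():
    touched = {i for i, j in blocks} | {j for i, j in blocks}
    assert touched == {(s + p) % P for s in (0, 1, 3)}
print("blocks per process:", len(blocks_of[0]))
```

Each process thus assembles P blocks, but the sub-meshes it needs to store form a shifted copy of the small generator vertex set, which is the source of the O(n/√P) memory footprint quoted above.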

Numerical Experiments
Our main interest lies in the study of the parallel matrix assembly and the efficiency of the matrix-vector multiplication. The results of the strong and weak scalability experiments are presented in Tables 1 and 2, respectively. The numerical experiments were carried out on the Salomon supercomputer at the IT4Innovations National Supercomputing Center, Czech Republic. The cluster is composed of 1 008 compute nodes, each of them equipped with two 12-core Intel Xeon E5-2680v3 processors and 128 GB of RAM. The nodes are interconnected by the 7D enhanced hypercube InfiniBand network. The theoretical peak performance totals 2 PFlop/s. We used METIS 5.1.0 for the decomposition of the mesh [10] and Intel Parallel Studio 2017.1.132 with the Intel Math Kernel Library (MKL) as the BLAS implementation. We use two MPI processes per compute node, 12 OpenMP threads per MPI process, and set KMP_AFFINITY="granularity=fine,scatter". All the tests have been performed on a cubical geometry split into three parts, see Fig. 3 for a central cut of the domain and the depiction of the resulting total field. The wave numbers are κ_0 = 4, κ_1 = 2, κ_2 = 5, κ_3 = 3 and μ_0 = μ_1 = μ_2 = μ_3 = 1. We used globally continuous piecewise linear trial and test functions for all operators included in the formulation. The parameters controlling the complexity of ACA were set to ε_ACA = 10^−6, μ_ACA = 1.2. The relative precision of the GMRES solver was set to ε_GMRES = 10^−6.
The measured quantities are the time in seconds to assemble the matrices, the time to perform one matrix-vector multiplication without MPI synchronization, and also the overhead required to synchronize the results via MPI. We also present the efficiency of the parallel solver. We performed a series of five experiments and the presented values are the averages of the results.
In the case of the strong scaling experiments (see Table 1), Γ_0 contains 258 048 elements and Γ_1, Γ_2, Γ_3 contain 110 592 elements each. Let t_P denote the real time of the parallel matrix assembly or the matrix-vector multiplication on P processes; the efficiency of the strong scaling on P processes is then calculated as (2 t_2)/(P t_P). Taking into account the relatively small size of the problem, we obtain a reasonable parallel efficiency of the system matrix assembly up to 64 compute nodes (128 processes). The matrix-vector multiplication scales relatively well up to 16 nodes (32 processes). The reason is twofold. The first major factor is the increasing time required for the vector synchronization after each multiplication. The second factor is the increasing density of the ACA matrices on higher numbers of processes. In the current implementation, our synchronization scheme is the following:
- we split the outer vector into 4 disjoint intervals (one for each subdomain),
- we perform all necessary computations for each interval followed by a non-blocking allreduce across all processes,
- to facilitate the progress of the non-blocking reductions, we perform periodic calls to MPI_Test on the master OpenMP thread.

The weak scaling experiments were performed on 1, 4, and 16 compute nodes, each running two MPI processes (see Table 2). On a single node, Γ_0 contains k_0 := 81 648 elements, and Γ_1, Γ_2, Γ_3 contain k_1 := 34 992 elements each. On P/2 nodes the number of mesh elements is proportionally increased, i.e., P k_0/2 for Γ_0 and P k_1/2 for Γ_1, Γ_2, and Γ_3. Let e_P := (k_0 log(P k_0/2) + 3 k_1 log(P k_1/2))/2 denote the expected per-process complexity of the parallel matrix assembly and matrix-vector multiplication on P processes, and let t_P denote the real time. The efficiency of the weak scaling on P processes is then calculated as (t_2 e_P)/(t_P e_2).
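The two efficiency formulas can be sketched as follows; the timings in the example are hypothetical placeholders, not the measured values from Tables 1 and 2:

```python
import math

def strong_efficiency(t_P, P, t_2):
    """Strong-scaling efficiency (2 t_2) / (P t_P), 2 processes as baseline."""
    return (2.0 * t_2) / (P * t_P)

def weak_efficiency(t_P, P, t_2, k0=81648, k1=34992):
    """Weak-scaling efficiency (t_2 e_P) / (t_P e_2), where
    e_P = (k0 log(P k0 / 2) + 3 k1 log(P k1 / 2)) / 2 is the expected
    per-process complexity with the O(N log N) model of ACA."""
    def e(p):
        return (k0 * math.log(p * k0 / 2) + 3 * k1 * math.log(p * k1 / 2)) / 2
    return (t_2 * e(P)) / (t_P * e(2))

# Hypothetical timings: 100 s on 2 processes, 27 s on 8 processes.
print(strong_efficiency(27.0, 8, t_2=100.0))  # 200 / 216, roughly 0.93
```

Note that both definitions use 2 processes (a single node) as the baseline, since a one-process run would not exercise the distributed block layout.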

Conclusion
We briefly presented a local multi-trace formulation for scattering problems with heterogeneous scatterers and applied the so-called cyclic graph decomposition to define a block-based workload distribution for the distributed parallel matrix assembly and matrix-vector multiplication. We performed experiments on up to 64 compute nodes and presented the strong and weak scaling properties of our approach. In our future work, we plan to refine and generalize the proposed methods to directly decompose the skeleton of the scatterer instead of the individual subdomains. We believe this could lead to better scalability and the ability to deal with problems with differently sized subdomains.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.