Abstract
Generation of appropriate computational meshes in the context of numerical methods for partial differential equations is technical and laborious and has motivated a class of advanced discretization methods commonly referred to as unfitted finite element methods. To this end, the finite cell method (FCM) combines high-order FEM, adaptive quadrature integration and weak imposition of boundary conditions to embed a physical domain into a structured background mesh. While unfortunate cut configurations in unfitted finite element methods lead to severely ill-conditioned system matrices that pose challenges to iterative solvers, such methods permit the use of optimized algorithms and data patterns in order to obtain a scalable implementation. In this work, we employ linear octrees for handling the finite cell discretization that allow for parallel scalability, adaptive refinement and efficient computation on the commonly regular background grid. We present a parallel adaptive geometric multigrid with Schwarz smoothers for the solution of the resultant system of the Laplace operator. We focus on exploiting the hierarchical nature of space tree data structures for the generation of the required multigrid spaces and discuss the scalable and robust extension of the methods across process interfaces. We present both the weak and strong scaling of our implementation up to more than a billion degrees of freedom on distributed-memory clusters.
Supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) in the collaborative research center SFB 837 Interaction Modeling in Mechanized Tunneling.
1 Introduction
In the context of numerical approximation of partial differential equations (PDE) for scientific and engineering applications alike, the generation of appropriate computational meshes is still one of the narrowest bottlenecks. This has given rise to isogeometric analysis (IGA) [18] on the one hand and unfitted finite element and meshfree methods [2] on the other. Although unfitted finite element methods encompass several classes, including the extended finite element method (XFEM) [3], CutFEM [5] and the finite cell method (FCM) [10, 22], their common goal is to solve the PDE without the need for a boundary-conforming discretization. As an unfitted finite element method, the finite cell method combines adaptive quadrature integration and high-order FEM together with the weak imposition of boundary conditions.
Although mesh generation is essentially circumvented, unfitted finite element methods face several challenges, the most conspicuous of which are the ill-conditioning of the system matrix and the imposition of essential boundary conditions [27]. The former issue limits the usability of many iterative solvers, which has led the majority of studies to focus on direct solvers. While direct solvers based on LU factorization have proven to be robust, their scalability suffers greatly due to poor complexity and concurrency [25]. Recently, a geometric multigrid preconditioner with a penalty formulation has been studied for the finite cell method [23] to formulate an efficient iterative solver.
On the other hand, unfitted FEM possesses characteristics that can be exploited to its advantage, especially for parallel computing. For instance, the computational mesh in unfitted FEM can normally be regular and Cartesian, which in turn permits efficient computation and precomputation of finite element values. A parallel implementation of multi-level hp-adaptive finite elements with a shared mesh was recently applied to the finite cell method, employing a CG solver with an additive Schwarz preconditioner in [19] and AMG preconditioning in [20].
The main contributions of the present work can be summarized as follows:

- We employ a fully distributed, space-tree-based discretization of the computational domain with a low memory footprint to allow the storage and manipulation of large problems and adaptive mesh refinement (AMR).
- We present the parallelization of the finite cell method with adaptive refinement, focusing on the scalability of different aspects of the computation by exploiting space-tree data structures and the regularity of the discretization.
- We formulate a scalable hybrid Schwarz-type smoother for the treatment of cut cells to use in our geometric multigrid solver.
- We employ parallel adaptive geometric multigrid to solve large-scale finite cell systems and focus on the process-local generation of the required spaces and favorable communication patterns.
- We present the strong and weak scalability of different computational components of our methods.
In Sect. 2, the FCM formulation of a model problem is set up. The geometric multigrid solver is formulated in Sect. 3. The developed methods are applied to a number of numerical experiments in Sect. 4. Finally, conclusions are drawn in Sect. 5.
2 Finite Cell Method
In the context of unfitted finite element methods, a given physical domain \(\varOmega \) with essential and natural boundaries \(\varGamma _{D}\) and \(\varGamma _{N}\), respectively, is commonly placed in an embedding domain \(\varOmega _{e}\) with favorable characteristics, such as axis alignment, as shown in Fig. 1. Consequently, appropriate techniques are required for integration over \(\varOmega \) and for the imposition of boundary conditions on \(\varGamma _{D}\) and \(\varGamma _{N}\). In this work, we use the Poisson equation as the model problem, given by
\( -\Delta u = f \;\; \text{in} \; \varOmega, \qquad u = g_{D} \;\; \text{on} \; \varGamma _{D}, \qquad \nabla u \cdot \textit{\textbf{n}} = g_{N} \;\; \text{on} \; \varGamma _{N}, \)
with source term \(f\) and boundary data \(g_{D}\) and \(g_{N}\),
where \(\varOmega \) is the domain, \(\varGamma = \varGamma _{D} \cup \varGamma _{N}\) is the boundary, \(\textit{\textbf{n}}\) is the normal vector to the boundary and u is the unknown solution.
2.1 Boundary Conditions
Natural Boundary Conditions. In the context of the standard finite element method, natural boundary conditions are commonly integrated over the surfaces of those elements that coincide with the natural part of the physical boundary \(\varGamma _{N}\); in the finite cell method, however, the physical boundary does not in general coincide with cell boundaries. Therefore, a separate description of the boundary is necessary for the integration of natural boundary conditions. Except for an appropriate Jacobian transformation from the surface space to the volume space, the integration of natural boundary conditions does not require special treatment.
Essential Boundary Conditions. The imposition of essential boundary conditions is a challenging task in unfitted finite element methods. Penalty methods [1, 4, 30], Lagrange multipliers [6, 12, 13, 14] and Nitsche’s method [7, 9, 11, 17, 21] are commonly used for this purpose. We use a stabilized symmetric Nitsche’s method with a local estimate for the stabilization parameter, which has the advantages of retaining the symmetry of the system, introducing no additional unknowns and being variationally consistent. The weak form is therefore given by: find \(u \in H^{1}(\varOmega)\) such that
\( \int_{\varOmega} \nabla u \cdot \nabla v \, d\varOmega - \int_{\varGamma _{D}} (\nabla u \cdot \textit{\textbf{n}})\, v \, d\varGamma - \int_{\varGamma _{D}} (\nabla v \cdot \textit{\textbf{n}})\, u \, d\varGamma + \lambda \int_{\varGamma _{D}} u\, v \, d\varGamma = \int_{\varOmega} f\, v \, d\varOmega + \int_{\varGamma _{N}} g_{N}\, v \, d\varGamma - \int_{\varGamma _{D}} (\nabla v \cdot \textit{\textbf{n}})\, g_{D} \, d\varGamma + \lambda \int_{\varGamma _{D}} g_{D}\, v \, d\varGamma \quad \forall v \in H^{1}(\varOmega), \)
in which \(f\) is the source term and \(g_{D}\) and \(g_{N}\) are the prescribed essential and natural boundary data,
where \(\lambda \) is the stabilization parameter. The computation of \(\lambda \) is further explained in Sect. 2.3.
2.2 Spatial Discretization
Unfitted finite element methods normally permit the use of a structured grid as the embedding domain. We employ distributed linear space trees [8] for the discretization of the finite cell space. Space tree data structures not only require minimal work for setup and manipulation, they also allow for distributed storage, efficient load balancing and adaptive refinement and have a small memory footprint. We make use of Morton ordering as illustrated in Fig. 2.
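As an illustration of the Morton (z-order) indexing used on such grids, the following minimal sketch interleaves the bits of integer cell coordinates; the function name and the bit-interleaving variant are illustrative stand-ins, and production octree codes such as p4est use optimized encodings:

```python
def morton2d(x, y, level):
    """Interleave the bits of integer cell coordinates (x, y) to obtain the
    Morton (z-order) index of a quadtree cell at the given refinement level."""
    code = 0
    for bit in range(level):
        code |= ((x >> bit) & 1) << (2 * bit)      # x-bits on even positions
        code |= ((y >> bit) & 1) << (2 * bit + 1)  # y-bits on odd positions
    return code

# The four cells of a level-1 quadtree are enumerated in z-order:
order = [morton2d(x, y, 1) for y in (0, 1) for x in (0, 1)]  # [0, 1, 2, 3]
```

Sorting cells by this index yields the space-filling z-curve along which the distributed grid is partitioned.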
An attractive aspect of computation on structured spaces is the optimization opportunities it provides, which is exactly where unfitted methods can seek to benefit compared to their boundaryconforming counterparts. For example, we compute element size, coordinates and Jacobian transformation efficiently on the fly without caching during integration.
A natural repercussion of adaptive refinement on space tree data structures is the existence of hanging nodes in the discretized space as shown in Fig. 1. To ensure the continuity of the solution, we treat hanging nodes by distributing their contribution to their associated non-hanging nodes and removing them from the global system. The influence of hanging nodes is thereby effectively local, and no additional constraint conditions or unknowns appear in the solution space.
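The elimination described above can be sketched element-locally. The 3-DOF patch below is a hypothetical minimal example with linear shape functions, where the hanging midpoint DOF is constrained to the average of its two edge endpoints:

```python
import numpy as np

# Hypothetical 3-DOF patch: DOFs 0 and 1 are the regular endpoints of a
# coarse edge, DOF 2 is a hanging node at its midpoint. Continuity with
# linear shape functions requires u_2 = (u_0 + u_1) / 2, encoded in C:
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])  # maps the 2 independent DOFs to all 3 local DOFs

A_loc = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  2.0, -1.0],
                  [-1.0, -1.0,  2.0]])  # symmetric local matrix (stand-in)

# Distribute the hanging-node contributions to the non-hanging DOFs and
# remove DOF 2 from the system:
A_cond = C.T @ A_loc @ C
```

The condensed matrix `A_cond` only couples the independent DOFs, so the hanging node never appears as a global unknown.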
2.3 Volume Integration
The physical domain is free to intersect the embedding domain. During volume integration, the portion of the embedding domain that lies outside of the physical domain, \(\varOmega _{e} \setminus \varOmega \), is penalized by a factor \(\alpha \ll 1\). This stage is essentially where the physical geometry is recovered from the structured embedding mesh. Therefore, cells that are cut by the physical boundary must be integrated with sufficient accuracy in order to resolve the geometry. On the other hand, the accuracy of standard Gaussian quadrature is decidedly deteriorated by discontinuities in the integrand. Thus, methods such as Gaussian quadrature with modified weights and points [24] and uniform [22] or adaptive [10] refinement, also known as composed Gaussian quadrature, have been proposed for the numerical integration of discontinuous integrands.
We use adaptive quadrature for volume integration within the finite cell discretization. A number of adaptive integration layers are thereby added on top of the function space of \(\varOmega _{e}\) for cut cells as shown in Fig. 1. The concept of space tree data structures is congenial for adaptive quadrature integration as the integration space can readily be generated by refinement towards the boundary intersection. Furthermore, the integration space retains the regularity of the parent discretization. This scheme is especially suitable to our parallel implementation, where a given cell is owned by a unique process; therefore, the adaptive quadrature integration procedure is entirely performed process locally, and duplicate computations on the ghost layer are avoided.
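As a simplified illustration of adaptive (composed) quadrature on cut cells, the sketch below recursively subdivides cells intersected by a circular boundary and applies an \(\alpha\)-weighted midpoint rule on the leaves. The midpoint rule and the corner-based cut test are illustrative stand-ins for the Gauss rules and geometry queries used in practice:

```python
import math

def inside(x, y):
    """Indicator of the physical domain: here the unit disk."""
    return x * x + y * y <= 1.0

ALPHA = 1e-10  # penalization factor for the fictitious part of a cell

def integrate(f, x0, y0, h, depth):
    """Alpha-weighted quadrature of f over the cell [x0,x0+h] x [y0,y0+h],
    refining cells cut by the boundary up to the given depth."""
    corners = [inside(x0 + i * h, y0 + j * h) for i in (0, 1) for j in (0, 1)]
    xm, ym = x0 + h / 2, y0 + h / 2
    if depth == 0 or all(corners) or not any(corners):
        # Leaf cell: midpoint rule, weighted by ALPHA if the midpoint lies
        # outside the physical domain. (For this convex geometry, uncut
        # cells are classified exactly by their corners.)
        w = 1.0 if inside(xm, ym) else ALPHA
        return w * f(xm, ym) * h * h
    # Cut cell: refine into four subcells (composed quadrature)
    return sum(integrate(f, x0 + i * h / 2, y0 + j * h / 2, h / 2, depth - 1)
               for i in (0, 1) for j in (0, 1))

# Area of the quarter disk embedded in the unit-square cell:
area = integrate(lambda x, y: 1.0, 0.0, 0.0, 1.0, depth=7)
err = abs(area - math.pi / 4)
```

The refinement concentrates quadrature work near the boundary while uncut cells are integrated at negligible cost, mirroring the integration-space refinement towards the boundary described above.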
Introducing a finite-dimensional function space \(V_{h} \subset H^{1}(\varOmega _{e})\), the finite cell formulation of the model problem can be written as: find \(u_{h} \in V_{h}\) such that
\( a(u_{h}, v_{h}) = l(v_{h}) \quad \forall v_{h} \in V_{h}, \)
with
\( a(u, v) = \int_{\varOmega _{e}} \alpha \, \nabla u \cdot \nabla v \, d\varOmega - \int_{\varGamma _{D}} (\nabla u \cdot \textit{\textbf{n}})\, v \, d\varGamma - \int_{\varGamma _{D}} (\nabla v \cdot \textit{\textbf{n}})\, u \, d\varGamma + \lambda \int_{\varGamma _{D}} u\, v \, d\varGamma, \)
\( l(v) = \int_{\varOmega _{e}} \alpha\, f\, v \, d\varOmega + \int_{\varGamma _{N}} g_{N}\, v \, d\varGamma - \int_{\varGamma _{D}} (\nabla v \cdot \textit{\textbf{n}})\, g_{D} \, d\varGamma + \lambda \int_{\varGamma _{D}} g_{D}\, v \, d\varGamma, \)
where \(\alpha \) is the indicator function that equals one inside \(\varOmega \) and takes the small penalization value introduced above on \(\varOmega _{e} \setminus \varOmega \).
The stabilization parameter drastically affects the solution behavior, and its proper identification is vital to achieving both convergence in the solver and the correct imposition of the boundary conditions. There are several methods, including local and global estimates, for the determination of the stabilization parameter [9, 15]. We employ a local estimate based on the coercivity condition of the bilinear form that can be formulated as a generalized eigenvalue problem of the form
\( \textit{\textbf{A}}\,\textit{\textbf{X}} = \textit{\textbf{B}}\,\textit{\textbf{X}}\,\varvec{\varLambda }, \)
where the columns of \(\textit{\textbf{X}}\) are the eigenvectors, \(\varvec{\varLambda }\) is the diagonal matrix of the eigenvalues, and \(\textit{\textbf{A}}\) and \(\textit{\textbf{B}}\) are formulated as
\( A_{ij} = \int_{\varGamma _{D}^{c}} (\nabla \phi _{i} \cdot \textit{\textbf{n}})\, (\nabla \phi _{j} \cdot \textit{\textbf{n}}) \, d\varGamma, \qquad B_{ij} = \int_{\varOmega ^{c}} \nabla \phi _{i} \cdot \nabla \phi _{j} \, d\varOmega, \)
where \(\phi _{i}\) are the basis functions and \(\varGamma _{D}^{c}\) and \(\varOmega ^{c}\) are the portion of the essential boundary that intersects a given cell and the cell domain, respectively. The stabilization parameter can be chosen as \(\lambda > \max (\varvec{\varLambda })\). This formulation leads to a series of relatively small generalized eigenvalue problems. Global estimates, on the other hand, assemble a single, large generalized eigenvalue problem by integration over the entire domain. The local estimate is more desirable in the context of parallel computing since it allows for the process-local assembly and solution of each problem. Moreover, most generalized eigensolver algorithms have non-optimal complexities, and a smaller system is therefore preferred.
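A minimal numerical sketch of such a local estimate follows; the cell matrices are random stand-ins for the actual boundary and volume integrals, and the safety factor of two over the largest eigenvalue is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the cell matrices: B (volume term) is symmetric positive
# definite, A (boundary term) is symmetric positive semi-definite.
G = rng.standard_normal((4, 4))
B = G @ G.T + 4.0 * np.eye(4)
S = rng.standard_normal((2, 4))
A = S.T @ S

# Reduce A x = lambda B x to a standard symmetric eigenproblem via the
# Cholesky factorization B = L L^T.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
eigvals = np.linalg.eigvalsh(Linv @ A @ Linv.T)

lam = 2.0 * eigvals.max()  # stabilization parameter above max(Lambda)
```

Since the matrices live on a single cut cell, the whole eigenproblem stays process local, which is precisely the property exploited above.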
3 Geometric Multigrid
We employ a geometric multigrid solver [16] for the resultant system of the finite cell formulation. Unfitted finite element methods in general and the finite cell method in particular usually lead to ill-conditioning of the system matrix due to the existence of cut elements, where cells of the embedding domain are intersected by the physical boundary [27]. Small cut fractions exacerbate this problem. Therefore, an efficient multigrid formulation requires special treatment of this issue. Nevertheless, the main components of geometric multigrid remain unaltered.
3.1 Grid Hierarchy
The hierarchical nature of space tree data structures allows for the efficient generation of the hierarchical grids required by geometric multigrid methods [26, 29]. We generate the grid hierarchy top-down from the finest grid. In order to keep the coarsening algorithm process local, sibling cells (cells that belong to the same parent) are kept on the same process for all grids. While the coarsening rules are trivial in the case of uniform grids, adaptively refined grids require elaboration. Starting from a fine grid \(\varOmega _{e,h_{l}}\), we generate the coarse grid \(\varOmega _{e,h_{l-1}}\) according to Algorithm 1. Aside from keeping cell families on the same process, the only other major constraint is 2:1 balancing, which means that no two neighboring cells can be more than one level of refinement apart. In practice, load balancing and the application of the mentioned constraints are carried out in a single step. Figure 3 shows a sample four-level grid hierarchy with the finest grid adaptively refined towards a circle in the middle of the square domain.
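A simplified, process-local sketch of the family-based coarsening follows; the 2:1 balance enforcement and the repartitioning steps of the actual algorithm are omitted, and leaves are identified by hypothetical (level, Morton index) pairs:

```python
def coarsen(leaves):
    """Replace every complete family of four sibling quadrants by its parent;
    leaves are (level, morton_index) pairs of a 2D quadtree."""
    leaves = set(leaves)
    coarse = set()
    for level, m in leaves:
        family = {(level, (m >> 2 << 2) | c) for c in range(4)}
        if family <= leaves:
            coarse.add((level - 1, m >> 2))   # full family -> parent cell
        else:
            coarse.add((level, m))            # incomplete family -> keep leaf
    return sorted(coarse)

# Finest grid: three level-1 quadrants plus quadrant 3 refined to level 2
fine = [(1, 0), (1, 1), (1, 2), (2, 12), (2, 13), (2, 14), (2, 15)]
coarse = coarsen(fine)
```

Because complete families are process local by construction, no communication is needed to decide which cells merge.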
3.2 Transfer Operators
Transfer operators provide mobility through the grid hierarchy, i.e., restriction from level \(l\) to \(l-1\) and prolongation from level \(l-1\) to \(l\). In order to minimize communication and avoid costly cell lookup queries, we perform these operations in two steps. Restriction starts by transferring entities from the distributed fine grid \(\varOmega _{e,h_{l}}\) to an intermediate coarse grid \(\varOmega _{e,h_{l-1}}^{i}\), followed by a transfer to the distributed coarse grid \(\varOmega _{e,h_{l-1}}\). Conversely, prolongation starts by transferring entities from the distributed coarse grid \(\varOmega _{e,h_{l-1}}\) to the intermediate coarse grid \(\varOmega _{e,h_{l-1}}^{i}\), followed by a transfer to the distributed fine grid \(\varOmega _{e,h_{l}}\). The intermediate grids are generated and accessed entirely process locally and only store minimal information regarding the Morton ordering of the local part of the domain. A similar approach is taken in [29]. The restriction and prolongation operations of a vector \(\textit{\textbf{v}}\) can be summarized as
\( \textit{\textbf{v}}_{l-1} = \mathcal {T}\left( \textit{\textbf{R}}_{l}\, \textit{\textbf{v}}_{l}\right) , \qquad \textit{\textbf{v}}_{l} = \textit{\textbf{P}}_{l}\, \mathcal {T}\left( \textit{\textbf{v}}_{l-1}\right) , \)
where \(\textit{\textbf{R}}_{l} = \textit{\textbf{P}}_{l}^{T}\). \(\mathcal {T}\) represents the transfer operator between intermediate and distributed grids.
This scheme allows \(\textit{\textbf{R}}\) and \(\textit{\textbf{P}}\) to be resolved in parallel, process locally and without the need for cell lookup queries. Additionally, flexible load balancing is achieved which is especially important for adaptively refined grids. The only additional component to establish effective communication between grids is the transfer operator \(\mathcal {T}\), which concludes the majority of the required communication.
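For intuition, the intra-process part of these operators can be sketched in 1D with linear interpolation, where the restriction is the transpose of the prolongation as stated above; the grid sizes and the helper function are illustrative:

```python
import numpy as np

def prolongation(n_coarse):
    """1D linear-interpolation prolongation: coarse nodes inject, fine
    midpoints average their two coarse neighbours."""
    n_fine = 2 * n_coarse - 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = 1.0
        if 2 * j + 1 < n_fine:
            P[2 * j + 1, j] = 0.5
            P[2 * j + 1, j + 1] = 0.5
    return P

P = prolongation(3)
R = P.T                                   # restriction, R_l = P_l^T
v_fine = P @ np.array([0.0, 1.0, 2.0])    # prolongate a coarse vector
```

In the distributed setting, this stencil application happens on the process-local intermediate grid, and only the transfer operator \(\mathcal {T}\) moves data between partitions.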
3.3 Parallelized Hybrid Schwarz Smoother
Special treatment of cut cells is crucial to the convergence of the solver for finite cell systems. This special treatment mainly manifests itself in the smoother operator \(\mathcal {S}\) in the context of geometric multigrid solvers. We employ a Schwarz-type smoother (e.g. [28], cf. also [19, 23]), where subdomains are primarily determined based on cut configurations: a subdomain is designated for every cut cell that includes all the functions supported on that cell. The remaining nodes, which do not appear in any cut cells, each compose a subdomain with only the functions supported on that node. The selection of subdomains is illustrated in Fig. 4. The Schwarz-type smoother can be applied in two manners, additively and multiplicatively, as given by
\( \textit{\textbf{x}} \leftarrow \textit{\textbf{x}} + \omega \sum _{i=1}^{n} \textit{\textbf{C}}_{i} \left( \textit{\textbf{b}} - \textit{\textbf{A}}\,\textit{\textbf{x}}\right) \quad \text {and} \quad \textit{\textbf{x}} \leftarrow \textit{\textbf{x}} + \textit{\textbf{C}}_{i} \left( \textit{\textbf{b}} - \textit{\textbf{A}}\,\textit{\textbf{x}}\right) , \; i = 1, \dots , n, \)
respectively, in which \(\omega \) is a damping factor and the multiplicative corrections are applied sequentially, with
\( \textit{\textbf{C}}_{i} = \textit{\textbf{R}}_{s,i}^{T}\, \textit{\textbf{A}}_{i}^{-1}\, \textit{\textbf{R}}_{s,i}, \)
where \(\textit{\textbf{R}}_{s,i}\) are the Schwarz restriction operators, \(\textit{\textbf{A}}_{i} = \textit{\textbf{R}}_{s,i}\textit{\textbf{A}}\textit{\textbf{R}}_{s,i}^{T}\) are the subdomain matrices and n is the number of subdomains. The Schwarz restriction operator \(\textit{\textbf{R}}_{s,i}\) essentially extracts the rows corresponding to the functions of subdomain i, and its transpose, the Schwarz prolongation operator, takes a vector from the subdomain space to the global space by padding it with zeros.
Parallelization in the first approach is a relatively straightforward task. Each process can simultaneously apply the correction from the subdomains that occur on it, and within each process, subdomain corrections can be applied concurrently. Since any given cell is owned by a unique process, no communication is required during this stage. The only communication takes place when the correction is synchronized over process interfaces at the end.
Parallel realization of the latter approach, however, is a challenging task. A strict implementation of the multiplicative Schwarz method requires substantial communication, not only for exchanging updated residual values but also for synchronizing the application of subdomain corrections, which is clearly not desirable for the parallel scalability of the algorithm. We therefore employ a compromise approach that adheres to the multiplicative application of the smoother as much as possible while minimizing the required communication. To this end, subdomains whose support lies completely within their owner process are applied multiplicatively, while at process interfaces, the additive approach is taken. This application approach is demonstrated in Fig. 4.
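A serial sketch of one such hybrid sweep is given below; the index sets and the 1D Laplacian are illustrative stand-ins, and in the parallel setting the "interface" subdomains would be those straddling process boundaries:

```python
import numpy as np

def hybrid_schwarz_sweep(A, b, x, interior, interface):
    """One smoothing sweep: interior subdomains are applied multiplicatively
    (fresh residual per correction), interface subdomains additively."""
    x = x.copy()                          # do not mutate the caller's vector
    for idx in interior:                  # multiplicative part
        r = b - A @ x
        x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    r = b - A @ x                         # additive part from one residual
    dx = np.zeros_like(x)
    for idx in interface:
        dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x + dx

# 1D Laplacian as a stand-in system with three subdomains
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
interior = [[0, 1, 2], [3, 4]]    # supports local to the owning process
interface = [[5, 6, 7]]           # subdomains crossing a process boundary

x = np.zeros(n)
for _ in range(200):
    x = hybrid_schwarz_sweep(A, b, x, interior, interface)
```

The additive corrections at the interface can be computed independently and then synchronized in a single communication step, which is the behavior the hybrid scheme is designed for.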
4 Numerical Studies
We perform a number of numerical studies to investigate the performance of the methods outlined in the previous sections. We use the finite cell formulation developed in Sect. 2 and employ geometric multigrid from Sect. 3 as a solver. We consider both uniform and adaptive grids and present the weak and strong scaling of different components of the computation. The computations are performed on a distributed-memory cluster with dual-socket Intel Xeon Skylake Gold 6148 CPUs with 20 cores per socket at 2.4 GHz per core, 192 GB DDR4 main memory per node and a 100 Gbit/s Intel Omni-Path network interconnect via PCIe x16 Gen 3. All nodes run Red Hat Enterprise Linux (RHEL) 7, and GCC 7.3 with the -O2 optimization flag is used to compile the project. All computations are performed employing MPI parallelization without additional shared-memory parallelization, utilizing up to 40 MPI processes per node, which equals the number of cores per node.
The physical domain considered in this benchmark example is a circle embedded in a unit square embedding domain throughout this section (see Fig. 3). The finite cell formulation of the Poisson equation is imposed on the embedding domain. An inhomogeneous Dirichlet boundary condition is imposed on an arc on the left of the circle, and a homogeneous Neumann boundary condition is imposed on the remaining part. This example is chosen to act as a reproducible benchmark. The conditioning of finite cell matrices directly depends on the configuration of the cells cut by the physical boundary. The circular domain covers a wide variety of cut configurations on each grid level due to its curvature; therefore, the resultant matrices include the ill-conditioning associated with the finite cell method and are representative of more general geometries. Furthermore, other computational aspects, e.g., volume integration, are virtually independent of the geometry and mainly vary with problem size.
The geometric multigrid solver is set up with three steps each of pre- and post-smoothing, employing a combination of the hybrid multiplicative Schwarz smoother from Sect. 3.3 and a damped Jacobi smoother. The Schwarz smoother is applied only to the three finest grids in each problem, and the damped Jacobi smoother is applied to the remaining grids. A tolerance of \(10^{-9}\) for the residual is used as the convergence criterion.
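As a point of reference for this setup, a minimal two-grid cycle with a damped Jacobi smoother can be sketched as follows; the 1D Poisson stand-in, the grid sizes and the exact coarse solve are illustrative, while the three pre-/post-smoothing steps and the residual tolerance mirror the configuration described above:

```python
import numpy as np

def jacobi(A, b, x, steps, omega=2.0 / 3.0):
    """Damped Jacobi smoother."""
    D = np.diag(A)
    for _ in range(steps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, P):
    """One cycle: 3 pre-smoothing steps, Galerkin coarse-grid correction,
    3 post-smoothing steps."""
    x = jacobi(A, b, x, steps=3)
    r_c = P.T @ (b - A @ x)             # restrict the residual, R = P^T
    A_c = P.T @ A @ P                   # Galerkin coarse-grid operator
    x = x + P @ np.linalg.solve(A_c, r_c)
    return jacobi(A, b, x, steps=3)

n_f, n_c = 17, 9                        # fine/coarse nodes, n_f = 2*n_c - 1
A = 2.0 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1)
b = np.ones(n_f)

P = np.zeros((n_f, n_c))                # linear interpolation prolongation
for j in range(n_c):
    P[2 * j, j] = 1.0
    if 2 * j + 1 < n_f:
        P[2 * j + 1, j] = 0.5
        P[2 * j + 1, j + 1] = 0.5

x = np.zeros(n_f)
for _ in range(50):                     # iterate to the residual tolerance
    if np.linalg.norm(b - A @ x) <= 1e-9:
        break
    x = two_grid(A, b, x, P)
```

In the actual solver, the coarse problem is itself treated by further levels of the hierarchy, and the Jacobi smoother is replaced by the Schwarz smoother on the three finest grids.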
The scaling studies report the runtime of different components of the computation. System setup \(T_{sys\_setup}\) includes setup and refinement of the main discretization, load balancing, setup of the finite cell space and resolution of the physical boundary. Assembly \(T_{assembly}\) is the time required for the assembly of the global system, i.e., integration and distribution of all components of the weak form. Solver setup \(T_{solver\_setup}\) concerns the generation and setup of the hierarchical spaces for geometric multigrid and includes the grid hierarchy, transfer operators and smoothers. Finally, \(T_{solver}\) and \(T_{iteration}\) refer to the total runtime and the runtime of a single iteration of the geometric multigrid solver, respectively.
A model with roughly 268 million degrees of freedom with uniform refinement and another model with roughly 16.8 million degrees of freedom with adaptive refinement towards the boundary are chosen to investigate the strong scalability of the computation as shown in Fig. 5. In both cases, the speedups of all components are compared to the ideal parallel performance. Ideal speedup is linearly proportional to the number of processing units, normalized to the smallest configuration that can accommodate the memory demand, i.e., 16 processes for the uniform grid and 4 processes for the adaptively refined grid. Except for \(T_{assembly}\), which virtually coincides with ideal speedup, the components show slightly smaller speedups; however, these minor deviations are practically inevitable in most scientific applications due to communication overhead and load imbalances. The strong scalability of all components can be considered excellent, as there are no breakdowns or plateaus and the differences from ideal speedup remain small.
On the other hand, weak scalability is investigated for a number of uniformly refined grids, ranging in size from approximately 16.7 million to 1.07 billion degrees of freedom, as shown in Fig. 6. In addition to keeping the number of degrees of freedom per core roughly constant, the size of the coarse problem is kept constant on all grids in order to study the scalability of the geometric multigrid solver; therefore, a deeper hierarchy is employed for larger problems, as detailed in Table 1. The convergence behavior of the multigrid solver is shown in Fig. 7. Within the weak scaling study, each problem encounters many different cut cell configurations on each level of the grid hierarchy. The observed boundedness of the iteration count is therefore a testament to the robustness of the approach. All components exhibit good weak scalability throughout the entire range. While \(T_{sys\_setup}\) and \(T_{assembly}\) are virtually constant for all grid sizes, \(T_{solver\_setup}\) and \(T_{solver}\) slightly increase on larger problems. The difference in \(T_{solver\_setup}\) can be attributed to the difference in the number of grid levels for each problem, i.e., larger problems with deeper multigrid hierarchies have heavier workloads in this step. On the other hand, \(T_{solver}\) has to be considered in conjunction with the iteration count. Although the multigrid solver is overall scalable in terms of the iteration count, there are still minor differences in the necessary number of iterations between different problems (see Fig. 7). \(T_{iteration}\) can be considered a normalized metric in this regard, and it remains virtually constant. Nevertheless, the differences in runtime remain small for all components and are negligible in practical settings.
Although a direct comparison is not possible due to differences in formulation, problem type and setup, hardware, etc., we give a high-level discussion of some aspects of the methods with respect to closely related works. In [23], a multigrid preconditioner with Schwarz smoothers was presented, showing bounded iteration counts; however, a parallelization strategy was not reported. In [20], a PCG solver with an AMG preconditioner was used. Similarly, a PCG solver with a Schwarz preconditioner was used in [19]. In both studies, a shared base mesh was employed. The sizes of the examples in [20] and [19] were smaller than those considered here, which further hinders a direct comparison; nevertheless, the multigrid solver presented in this work shows promising results both in terms of parallel scalability and absolute runtime for similarly sized problems. The multigrid solver is furthermore robust with respect to broad variations in problem size, whereas the iteration count of the PCG solver in [19] increased significantly for larger problems, which was directly reflected in the runtime. Geometric multigrid is used as a solver in this work. It is expected that even more robustness and performance can be gained if it is used in conjunction with a Krylov subspace accelerator, such as the conjugate gradient (CG) method.
5 Conclusions
A parallel adaptive finite cell formulation along with an adaptive geometric multigrid solver is presented in this work. Numerical benchmarks indicate that the core computational components of FCM as well as the GMG solver scale favorably in both the weak and strong senses. The use of distributed space-tree-based meshes allows not only the scalable storage and manipulation of extremely large problems, but also effective load balancing, which is above all manifest in the perfect scalability of the integration of the weak form. Furthermore, the suitability of the space-tree-based algorithms for the generation of multigrid spaces in parallel environments is demonstrated by the scalability of the solver setup. The geometric multigrid solver with the Schwarz-type smoother exhibits robustness and scalability both in terms of the required iteration count for different problem sizes and in terms of parallelization. We strive to minimize communication in the parallelization of the multigrid components, especially for the application of the Schwarz smoother; nevertheless, iteration counts do not suffer from parallel execution, and the solver shows good weak and strong scalability. The ability to solve problems with more than a billion degrees of freedom and the scalability of the computations are promising results for the application of the finite cell method with geometric multigrid to large-scale problems on parallel machines. Nevertheless, further examples and problem types are necessary to extend the applicability of the presented methods. Moreover, the main algorithms and underlying data structures used in the presented methods are suitable for hardware accelerators such as GPUs and FPGAs, and we expect that a scalable implementation should be achievable for such architectures given optimized data paths and communication patterns.
In particular, the semi-structured nature of the adaptive octree approach is conducive to a hardware-oriented implementation compared to unstructured meshing approaches. We intend to explore these opportunities in future work.
References
Babuška, I.: The finite element method with penalty. Math. Comput. 27(122), 221–228 (1973)
Belytschko, T., Chen, J.: Meshfree and Particle Methods. Wiley, Chichester (2009)
Belytschko, T., Moës, N., Usui, S., Parimi, C.: Arbitrary discontinuities in finite elements. Int. J. Numer. Meth. Eng. 50(4), 993–1013 (2001)
Burman, E.: Ghost penalty. C. R. Math. 348(21), 1217–1220 (2010)
Burman, E., Claus, S., Hansbo, P., Larson, M.G., Massing, A.: CutFEM: discretizing geometry and partial differential equations. Int. J. Numer. Meth. Eng. 104(7), 472–501 (2015)
Burman, E., Hansbo, P.: Fictitious domain finite element methods using cut elements: I. A stabilized Lagrange multiplier method. Comput. Methods Appl. Mech. Eng. 199(41–44), 2680–2686 (2010)
Burman, E., Hansbo, P.: Fictitious domain finite element methods using cut elements: II. A stabilized Nitsche method. Appl. Numer. Math. 62(4), 328–341 (2012)
Burstedde, C., Wilcox, L.C., Ghattas, O.: p4est: scalable algorithms for parallel adaptive mesh refinement on forests of octrees. SIAM J. Sci. Comput. 33(3), 1103–1133 (2011)
Dolbow, J., Harari, I.: An efficient finite element method for embedded interface problems. Int. J. Numer. Meth. Eng. 78(2), 229–252 (2009)
Düster, A., Parvizian, J., Yang, Z., Rank, E.: The finite cell method for threedimensional problems of solid mechanics. Comput. Methods Appl. Mech. Eng. 197(45–48), 3768–3782 (2008)
Embar, A., Dolbow, J., Harari, I.: Imposing Dirichlet boundary conditions with Nitsche’s method and splinebased finite elements. Int. J. Numer. Meth. Eng. 83(7), 877–898 (2010)
FernándezMéndez, S., Huerta, A.: Imposing essential boundary conditions in meshfree methods. Comput. Methods Appl. Mech. Eng. 193(12–14), 1257–1275 (2004)
Flemisch, B., Wohlmuth, B.I.: Stable Lagrange multipliers for quadrilateral meshes of curved interfaces in 3D. Comput. Methods Appl. Mech. Eng. 196(8), 1589–1602 (2007)
Glowinski, R., Kuznetsov, Y.: Distributed Lagrange multipliers based on fictitious domain method for second order elliptic problems. Comput. Methods Appl. Mech. Eng. 196(8), 1498–1506 (2007)
Griebel, M., Schweitzer, M.A.: A particle-partition of unity method part V: boundary conditions. In: Hildebrandt, S., Karcher, H. (eds.) Geometric Analysis and Nonlinear Partial Differential Equations, pp. 519–542. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-642-55627-2_27
Hackbusch, W.: Multi-Grid Methods and Applications, vol. 4. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-662-02427-0
Hansbo, A., Hansbo, P.: An unfitted finite element method, based on Nitsche’s method, for elliptic interface problems. Comput. Methods Appl. Mech. Eng. 191(47–48), 5537–5552 (2002)
Hughes, T.J., Cottrell, J.A., Bazilevs, Y.: Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Eng. 194(39–41), 4135–4195 (2005)
Jomo, J.N., et al.: Robust and parallel scalable iterative solutions for largescale finite cell analyses. Finite Elem. Anal. Des. 163, 14–30 (2019)
Jomo, J.N., et al.: Parallelization of the multi-level hp-adaptive finite cell method. Comput. Math. Appl. 74(1), 126–142 (2017)
Nitsche, J.: Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abh. Math. Sem. Univ. Hamburg 36, 9–15 (1971). https://doi.org/10.1007/BF02995904
Parvizian, J., Düster, A., Rank, E.: Finite cell method. Comput. Mech. 41(1), 121–133 (2007). https://doi.org/10.1007/s00466-007-0173-y
de Prenter, F., et al.: Multigrid solvers for immersed finite element methods and immersed isogeometric analysis. Comput. Mech. 65(3), 807–838 (2019). https://doi.org/10.1007/s00466-019-01796-y
Rabczuk, T., Areias, P., Belytschko, T.: A meshfree thin shell method for nonlinear dynamic fracture. Int. J. Numer. Meth. Eng. 72(5), 524–548 (2007)
Saad, Y.: Iterative Methods for Sparse Linear Systems, vol. 82. SIAM, Philadelphia (2003)
Sampath, R.S., Biros, G.: A parallel geometric multigrid method for finite elements on octree meshes. SIAM J. Sci. Comput. 32(3), 1361–1392 (2010)
Schillinger, D., Ruess, M.: The finite cell method: a review in the context of higher-order structural analysis of CAD and image-based geometric models. Arch. Comput. Methods Eng. 22(3), 391–455 (2015). https://doi.org/10.1007/s11831-014-9115-y
Smith, B., Bjorstad, P., Gropp, W.: Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, Cambridge (1996)
Sundar, H., Biros, G., Burstedde, C., Rudi, J., Ghattas, O., Stadler, G.: Parallel geometricalgebraic multigrid on unstructured forests of octrees. In: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, p. 43. IEEE Computer Society Press (2012)
Zhu, T., Atluri, S.: A modified collocation method and a penalty formulation for enforcing the essential boundary conditions in the element free Galerkin method. Comput. Mech. 21(3), 211–222 (1998). https://doi.org/10.1007/s004660050296
Acknowledgments
Financial support was provided by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) in the framework of subproject C4 of the collaborative research center SFB 837 Interaction Modeling in Mechanized Tunneling. This support is gratefully acknowledged. We also gratefully acknowledge the computing time on the computing cluster of the SFB837 and the Department of Civil and Environmental Engineering at Ruhr University Bochum, which has been employed for the presented studies.
© 2020 Springer Nature Switzerland AG
Saberi, S., Vogel, A., Meschke, G. (2020). Parallel Finite Cell Method with Adaptive Geometric Multigrid. In: Malawski, M., Rzadca, K. (eds.) Euro-Par 2020: Parallel Processing. Lecture Notes in Computer Science, vol. 12247. Springer, Cham. https://doi.org/10.1007/978-3-030-57675-2_36
Print ISBN: 978-3-030-57674-5
Online ISBN: 978-3-030-57675-2