# The domain interface method: a general-purpose non-intrusive technique for non-conforming domain decomposition problems


## Abstract

A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.

## Keywords

Domain decomposition methods · Non-conforming interface · Weak coupling techniques for non-matching meshes

## 1 Introduction

Modern engineering applications require sophisticated simulation techniques which deal with the increasing complexity and refinement of computational models. Consequently, detailed finite element discretizations are commonly used in present-day structural analysis, and a number of practical situations are emerging in which special techniques are indispensable to handle non-matching discretizations. In this introduction we focus on engineering applications and computational techniques concerning the assembly and resolution of models involving non-overlapping meshes.

### 1.1 The need for non-matching mesh assemblies in computational mechanics

Typical scenarios arise when independent mesh discretizations are applied to different parts of a structure and when large models are divided and distributed among different working teams. A common situation is encountered when particular structural components are reused in evolving designs such as the wings among diverse aircraft models with changing fuselages. The meshes of the structural components are most likely non-matching and need to be assembled along common edges using special techniques that account for the non-conforming interfaces [34, 35].

The field of contact mechanics [51] has significantly boosted the formulation of new assembly techniques since the most general situations between contact surfaces are encountered therein, e.g. bodies sliding over a surface or rolling and rebounding of different discretized entities. Other related applications are connected with fluid-structure interaction [7] and multiphysics [14, 41] analyses, where different discretizations are taken into account due to the distinct physical nature of the interacting components.

An emerging set of techniques intimately related with computational material design are the so-called multiscale and multiresolution methods [13, 17, 27]. The idea is to account for the lower scale components and their interactions with an upper scale level. Multiresolution techniques based on mesh adaptivity [33] and concurrent multiscale analysis such as global/local approaches [8], variational multiscale methods [25, 26, 36] and multiscale domain decomposition methods [15, 20, 28] are examples in which lower (fine) scale discretizations are glued to upper (coarse) scale models. This can be performed selectively during the computations at areas of interest, e.g. stress concentrations, crack growth and appearance of non-linear effects, by “zooming” into these regions and substituting a part of the domain discretization by its corresponding refined model [30, 31]. As a result, a number of non-conforming interfaces between different scale discretizations arise which need to be handled by appropriate techniques.

Most of the above mentioned applications involve complex models which, upon discretization, e.g. using finite elements, lead to large systems of equations. It is then not surprising that most of the existing methodologies to connect different meshes are encountered in domain decomposition techniques [19]. These techniques roughly divide the complex model into subdomains and distribute the corresponding calculations among different processor units. They can be viewed as powerful parallel solvers, typically formed by a blend of direct solvers that account for the local domain factorizations and iterative solvers for an interface problem that accounts for the connectivity of all domains. It is precisely in the generation of such connectivity where the techniques discussed in this paper play a crucial role, since they ensure continuity of the solution field across all conforming and non-conforming interfaces.

### 1.2 Non-overlapping domain decomposition analysis with non-conforming interfaces

Strong coupling constraints refer essentially to collocation techniques where one constraint is assigned to each degree of freedom (DOF) while weak coupling techniques refer to constraints enforced in an integral or average sense along portions of the interface surface assigning one constraint per group of DOFs belonging to the interface portion. In the contact domain literature these techniques are often referred to as node-to-segment techniques and segment-to-segment techniques, respectively.

Stiffening and locking effects can only be alleviated through *h*- or *p*-refinement of the coarse mesh at the interface for the strong coupling case. For this reason, weak coupling techniques are preferred in the context of a general non-conforming interface since they relax the constraint such that stiffening and locking effects do not occur. Mortar methods and their formulation in terms of finite elements constitute the most general technique for non-conforming interfaces in which geometrical incompatibilities can be handled [5]. The most challenging applications of such techniques are identified in time-dependent domains, e.g. contact problems, and when highly refined domains are considered at local areas without the need for an expensive mesh adaption, e.g. multiresolution analysis.
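The distinction can be made concrete with a small sketch (hypothetical 1D interface; node positions and quadrature weights are arbitrary illustrative choices, not taken from this work): strong coupling yields one collocation constraint per slave DOF, weak coupling one integrated constraint per interface portion, and both must at least transmit a rigid translation exactly:

```python
import numpy as np

# Hypothetical 1D interface: a coarse master segment [0, 1] with 2 nodes
# and 3 non-matching slave nodes along it.
master_x = np.array([0.0, 1.0])          # master segment end nodes
slave_x = np.array([0.2, 0.5, 0.9])      # non-matching slave nodes

def shape(x):
    """Linear shape functions of the master segment evaluated at x."""
    return np.array([1.0 - x, x])

# Strong (node-to-segment) coupling: one collocation constraint per slave
# node, u_slave(x_i) - N(x_i) . u_master = 0 -> one row of B per slave DOF.
B_strong = np.zeros((len(slave_x), len(slave_x) + len(master_x)))
for i, x in enumerate(slave_x):
    B_strong[i, i] = 1.0                     # slave DOF
    B_strong[i, len(slave_x):] = -shape(x)   # interpolated master DOFs

# Weak (segment-to-segment) coupling: one constraint for the whole
# segment, int (u_slave - u_master) dx = 0, here with a crude one-point
# quadrature per slave sub-interval (weights are illustrative).
weights = np.array([0.35, 0.35, 0.3])
B_weak = np.zeros((1, len(slave_x) + len(master_x)))
for i, (x, w) in enumerate(zip(slave_x, weights)):
    B_weak[0, i] += w
    B_weak[0, len(slave_x):] -= w * shape(x)

# A rigid translation (all DOFs equal) satisfies both constraint sets.
rigid = np.ones(len(slave_x) + len(master_x))
assert np.allclose(B_strong @ rigid, 0.0)
assert np.allclose(B_weak @ rigid, 0.0)
```

Note that the strong variant carries three constraint rows against the weak variant's single row, which is precisely the over-constraining that weak coupling relaxes.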

The enforcement of constraints at a non-conforming interface is typically done with the introduction of Lagrange multipliers [1]. Basically, an extra term is added to the variational statement which corresponds to the work performed at the non-conforming interface in terms of the existing gap and interface stresses. In the context of the finite element method, different discretizations are generated at each domain and distribution functions are associated with the Lagrange multipliers at the interface, leading to a discretized weak form of the equilibrium equations and the interface compatibility constraints. The distribution functions for the Lagrange multipliers and shape functions for the finite elements should be properly selected to fulfill the Ladyzhenskaya–Babuška–Brezzi (LBB) condition (also known as the *inf-sup* condition) [2] in order to guarantee that both discretizations converge to the right solution upon mesh refinement. Other techniques to enforce the constraints are related to the introduction of a penalty term which associates a high cost to the violation of the compatibility constraint. This is the case of penalty methods, which have the advantage of not incorporating extra DOFs into the system of equations, although the penalty term can influence the solution. Methods based on augmented Lagrange multipliers seek an optimal compromise between penalty and Lagrange multipliers, allowing an exact enforcement in combination with a penalty-like regularization which improves the numerical treatment. In such methods the constraint violation is also penalized but, quite in contrast, the solution is not influenced by the penalty term. In fact, the convexity of the functional is increased to facilitate the search for its minimum. Explicit elimination of the constraints can be performed as well, leading to a system of equations with no extra unknowns.
However, these methods are not straightforward to implement for the case of non-matching meshes since they require computing a null space of the compatibility matrices used in the equivalent dual formulation or constructing a projection operator, which can be demanding in terms of storage. The reader is referred to the work of Rixen [45] for an overview of such techniques in the context of domain decomposition methods. Yet another method to enforce the constraints without the introduction of Lagrange multipliers was introduced by Nitsche [37], which can be regarded as an intermediate technique between the Lagrange multiplier and the penalty method. It essentially modifies the weak form by adding a term including a positive constant parameter that enforces the Dirichlet boundary condition. Such a modification depends on the particular problem, but it does not lead to an ill-conditioned system as the penalty method would do for large values of the penalty parameter. In fact, the stabilization term exhibits less influence on the solution than standard penalty methods and, in practice, large values are not needed in order to ensure convergence and a proper enforcement of the constraints. A Nitsche method to handle the interface constraints derived from domain decomposition methods was introduced in [4]. In order to avoid integrating products of functions on unrelated meshes, the Lagrange multiplier method can be adopted to enforce the interface constraints and a Nitsche method can be employed to stabilize the system. This is accomplished by introducing an extra term in the variational principle that couples the multipliers with the stress fields at the interface [23]. Although the system of equations is augmented due to the introduction of the Lagrange unknowns, no constraints are needed for the discretization of the hybrid solution field since stabilization is accounted for by the extra “penalty” term. This technique has proven to be especially useful for the constraint enforcement in contact domain methods [21, 38] and will be utilized in the present contribution.
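The trade-offs discussed above can be illustrated on a two-DOF toy problem (a generic quadratic energy with a single constraint, not the formulation of this paper): Lagrange multipliers enforce the constraint exactly at the price of an extra unknown and a saddle-point structure, while a penalty term keeps the system size but only satisfies the constraint approximately:

```python
import numpy as np

# Two springs whose end DOFs u1, u2 must coincide:
# minimize 0.5 * u^T K u - f^T u  subject to  B u = u1 - u2 = 0.
K = np.diag([2.0, 3.0])
f = np.array([1.0, 1.0])
B = np.array([[1.0, -1.0]])

# Lagrange multiplier method: exact enforcement via the saddle-point
# system [K  B^T; B  0] [u; lam] = [f; 0].
A = np.block([[K, B.T], [B, np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(f, 0.0))
u_lag, lam = sol[:2], sol[2]

# Penalty method: K + p * B^T B; the constraint is only approximately
# satisfied, with a violation that shrinks as p grows (but large p
# degrades conditioning).
p = 1e6
u_pen = np.linalg.solve(K + p * B.T @ B, f)

assert abs(u_lag[0] - u_lag[1]) < 1e-12      # exact enforcement
assert abs(u_pen[0] - u_pen[1]) < 1e-5       # approximate enforcement
assert np.allclose(u_pen, u_lag, atol=1e-5)
```

Augmented Lagrangian schemes combine both ingredients: the multiplier carries the exact constraint force, so the penalty coefficient can stay moderate.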

Domain decomposition frameworks typically found in the literature that account for non-conforming interfaces are based on the introduction of Lagrange multipliers to weakly enforce the compatibility constraints. This is the case for the mortar approach [5], which is currently the most general and well established methodology. It essentially consists of a segment-to-segment discretization strategy where one of the domain surfaces at the interface is considered the ‘mortar’ (master) surface whilst the other is the ‘non-mortar’ (or slave). There are also variants of the approach where a third intermediate surface is considered with a reference displacement, although this obviously leads to an increase of the number of DOFs. Constraints are, therefore, weakly enforced by minimizing the gap with respect to the mortar surface. This methodology presents an obvious disadvantage when the selected mortar discretization is significantly coarser than the non-mortar one, since a higher error can be obtained at the interface and the patch test might not be satisfied. The dual domain decomposition method proposed by Herry et al. [24] presents a highly accurate technique to glue non-conforming interfaces by means of Lagrange multipliers. It basically uses a third interface discretization with optimal location of the DOFs such that the kinematic continuity at the interface is exact. However, the technique is only valid for geometrically compatible non-conforming interfaces and, for this reason, arbitrary curved interfaces and interfaces which do not share the limit nodes cannot be considered therein. The localized Lagrange multipliers (LLM) method proposed by Park et al. [39] consists in the introduction of a third interface surface with a specific discretization in order to collocate Lagrange multipliers to enforce the constraints at the non-matching meshes. Such a discretization is performed in order to *a priori* satisfy the constant stress patch condition. The technique can be viewed as a general and optimal node-to-segment approach applied to the connection frame, but arbitrary, highly irregular grids and geometrically incompatible interfaces are still not addressed.

In contrast with the above mentioned techniques, we propose a general and flexible methodology to account for the most complex interface situations, i.e. geometrically incompatible and arbitrarily non-conforming ones. The main idea of the domain interface method (DIM) is to explicitly discretize the interface through a Delaunay triangulation. In all previously introduced techniques, a slave node or segment belonging to one domain interface is projected onto the other domain interface or onto an auxiliary one; the interface constraints are therefore formulated on a domain which is one dimension lower than the subdivided domains. In the DIM, the interface constraints are formulated on an intermediate interface of the same dimension as the adjacent decomposed domains (cf. Fig. 2). Consequently, the interface surface is continuous and uniquely defined upon the Delaunay discretization without any assumptions on the master/slave side. This results in full and non-overlapping connections leading to satisfactory results concerning the constant stress patch test. The geometrical details, weak form and FE implementation are given in Sect. 2, and a number of representative simulations are discussed in Sect. 3 in order to highlight the advantages and applicability of the proposed approach against other established methodologies.

## 2 Formulation of the DIM method

In this section the necessary geometrical aspects of the methodology are introduced and the strong and weak forms of the problem are outlined. The solvability of the resulting system is discussed in terms of its stabilization and possible resolution choices, including a parallel scheme. The main idea behind the DIM concerns an explicit meshing of the interface between domains and is inspired by the methodology introduced in [21, 38] for contact mechanics. Rather than being a particularization of the contact domain method for the case of tied contact, the DIM equations stem from exporting the concept of the interface domain and the generality of the contact interface connections to the family of domain decomposition methods. In this manner, a new set of techniques within the domain decomposition methods is devised, such as a non-intrusive methodology to handle rigid body modes (RBMs) without the need for extending the solution field with the RBM intensities as is frequently done in established methodologies [11]. For the sake of completeness the methodology is introduced considering a finite strain case. Infinitesimal deformations are recovered by considering the necessary simplifications in the presented theory (i.e. small displacements compared to the domain dimensions and negligible gradients of such displacements). Compact notation will be utilized for tensor quantities throughout the document unless a different notation is specifically mentioned.

### 2.1 Geometrical description of the DIM

Consider the structure assembly depicted in Fig. 3 (top) where the domain \(\Omega \) is composed of the union of \(N_{\text {s}}\) non-overlapping domains \(\Omega ^{(s)}\). At each domain \(\Omega ^{(s)}\) one can identify the regions where Dirichlet \(\Gamma _{\text {u}}^{(s)}\) and Neumann \(\Gamma _{\sigma }^{(s)}\) boundary conditions are imposed. The interface is defined as \(\Gamma _{\text {I}}^{(s)}=\partial \Omega ^{(s)} \cap \partial \Omega ^{(q)}\), with outward unit normal \({\pmb {\nu }}^{(s)}\), where \(\partial \Omega \) stands for the boundaries of the adjacent domains *s* and *q*. Discretization of the two bodies, e.g. using finite elements (FE), leads to a number of \(N_{\lambda }^{(s)}\) vertices at the domain boundary \(\partial \Omega ^{(s)}\) located in the vicinity of \(\Gamma _{\text {I}}^{(s)}\) which need to be involved in the interface discretization.

*k*. The result will be independent of the chosen magnitude of *k* but, in our analyses, \(k\approx h_{\text {e}}\), \(h_{\text {e}}\) being an average of the equivalent FE size. The fictitious coordinates \({\mathbf {{x}}}^{\prime }_i\) are utilized to generate a Delaunay triangulation which defines the interface domain
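A geometric sketch of this construction is given below (boundary vertices are hypothetical, and `scipy.spatial.Delaunay` stands in for whatever triangulator an implementation would use): the two sides of the fictitious zero-width interface are offset by \(k\) and the resulting band is triangulated, so that every triangular patch ties DOFs of both domains:

```python
import numpy as np
from scipy.spatial import Delaunay

# Boundary vertices of two non-matching meshes along a nominally
# coincident interface (hypothetical coordinates): 4 nodes on the coarse
# side, 6 on the fine side.
side_a = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 4)])
side_b = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 6)])

# Open the zero-width interface with a fictitious offset k (k ~ h_e).
k = 0.25
pts = np.vstack([side_a + [0.0, +k / 2], side_b + [0.0, -k / 2]])

# The Delaunay triangulation of the offset band defines the interface
# domain D as a set of linear triangular patches D^(p).
tri = Delaunay(pts)

# Every non-degenerate triangle must take vertices from both sides, so
# each patch connects DOFs of the two domains.
n_a = len(side_a)
mixed = [(t < n_a).any() and (t >= n_a).any() for t in tri.simplices]
assert all(mixed)
# Triangulating a strip whose points all lie on its boundary yields
# (number of points - 2) triangles.
assert len(tri.simplices) == len(pts) - 2
```

The offset is purely auxiliary: constraints are formulated on the band patches, and the result does not depend on the chosen magnitude of *k*.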

### *Remark 2.1*

Upon discretization of the interface domain *D* using triangular linear elements \(D^{(p)}\), the integrals over a geometrically incompatible interface converge, when \(h\rightarrow 0\), to a bounded value:

The incremental motion \(\phi ^{\text {D}}\) maps the interface domain *D* at time \(t_n\) onto its current configuration. Therefore, \(D_{n+1}=\phi ^{\text {D}}(D_{n})\) and \(\gamma _{\text {D}}=\phi ^{\text {D}}(\Gamma _{\text {D}})\), which represent the current and previous domain interface and interface surfaces, respectively (cf. Fig. 4). The incremental displacement field at the interface domain \({\mathbf {{u}}}^{\text {D}}\) is calculated by linearly interpolating the displacement increments \(d_i^{\text {D}}\) of the interface element vertices as

\({\mathbf {{u}}}^{\text {D}}({\mathbf {{x}}}_n)=\sum _{i} N_i({\mathbf {{x}}}_n)\, d_i^{\text {D}},\)   (8)

with \(N_i\) the linear shape functions of the interface element evaluated at the previous configuration *n*.

### *Remark 2.2*

It is important to note that \({\mathbf {{f}}}^{\text {D}}({\mathbf {{x}}}_{n})\equiv {\mathbf {{f}}}^{(p)}=constant, \quad \forall {\mathbf {{x}}}_{n}\in D^{(p)}\) due to the linear character of the incremental displacements defined in (8).
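Remark 2.2 follows directly from the linearity of the interpolation; the following minimal check (vertex coordinates and displacement increments are hypothetical) recovers the single constant incremental deformation gradient of a linear triangular patch from its vertex data:

```python
import numpy as np

# For a linear (constant strain) triangular patch, f = I + grad(u) is one
# constant matrix over the whole patch. Vertex data are hypothetical.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # reference vertices
d = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, -0.05]])  # displacement increments

# Linear interpolation implies u(X_i) - u(X_0) = grad(u) @ (X_i - X_0),
# so grad(u) follows from the two edge relations.
dX = (X[1:] - X[0]).T         # 2x2 matrix of reference edge vectors
dU = (d[1:] - d[0]).T         # corresponding displacement differences
grad_u = dU @ np.linalg.inv(dX)
f_patch = np.eye(2) + grad_u  # constant over the whole patch D^(p)

assert np.allclose(f_patch, [[1.1, 0.0], [0.0, 0.95]])
```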

*n* for a given point \({\mathbf{x}}_n\) and its normal projection onto the base-line \(\bar{\mathbf{x}}_n\) as

\(D^{(p)}\), and not only defined at the interface nodes as other methodologies would consider. For future use in the variational statement it is convenient to express the gap in terms of the displacement field and, to this end, a Taylor series expansion of \(\phi ^{\text {D}}({\mathbf {{x}}}_n)\) is considered around \(\bar{\mathbf {{x}}}_n\) up to second order terms, with \({\mathbf {{x}}}_n-\bar{\mathbf {{x}}}_n=g_{\text {N}}^{0}({\mathbf{x}}_n){\mathbf{N}}^{(p)}\). In this spirit

### *Remark 2.3*

The expressions of the gap match the ones obtained in node-to-segment techniques [51] when \({\mathbf {{x}}}_n\) is considered the slave node and \(\bar{\mathbf {{x}}}_n\) is chosen as the master one. However, as already pointed out by Oliver et al. [38], the expressions in (16) can be regarded as more general since they are defined continuously throughout the interface patch \(D^{(p)}\) and not only for the vertices. Therefore the strategy is comparable to segment-to-segment techniques without the need for a definition of the master and slave surfaces.
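The geometric operation behind these expressions can be sketched in a few lines; the following 2D example (coordinates and helper name are hypothetical) projects a point onto a straight base-line and measures the normal gap, exactly the node-to-segment operation the remark refers to, but evaluable at any point of the patch:

```python
import numpy as np

# Normal gap of a point x_n with respect to a straight base-line: project
# x_n onto the line to obtain xbar_n, then g_N = (x_n - xbar_n) . N.
def normal_gap(x_n, seg_a, seg_b):
    t = seg_b - seg_a
    t = t / np.linalg.norm(t)               # unit tangent of the base-line
    N = np.array([-t[1], t[0]])             # unit normal (90-degree rotation of t)
    xbar = seg_a + ((x_n - seg_a) @ t) * t  # normal projection onto the line
    return (x_n - xbar) @ N, xbar

g_N, xbar = normal_gap(np.array([0.5, 0.2]),
                       np.array([0.0, 0.0]), np.array([1.0, 0.0]))
assert abs(g_N - 0.2) < 1e-12
assert np.allclose(xbar, [0.5, 0.0])
```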

### *Remark 2.4*

It is important to note that the gap intensity becomes singular when a perfect connection is fulfilled at the previous configuration, i.e. \({g}_{\text {T}}^{0}(\mathbf {{x}}_n)=0\). However, the integral terms added to the variational statement that account for the work at the interface converge to a bounded value despite the kernel being unbounded, as shown in (2).

*n* and the normal vector \({\mathbf {{N}}}^{(p)}={\pmb {\nu }}_{n}^{(s)}\) as

### *Remark 2.5*

For the case of tied contact within the present domain decomposition framework there is no need for a splitting between normal and tangential contributions. However, this splitting is introduced for the sake of completeness, to present a methodology that serves as a basis to tackle, if needed, more complex phenomena in which a different treatment can be considered for the tangential and normal interface components. Such scenarios may involve, for instance, sliding between domains. Additionally, the imposition of particular boundary conditions utilizing the Lagrange multiplier framework can be considered as well, in which the tangential component is treated differently due to friction or sliding. Specific cases involving fluid-structure interaction could be treated in this manner as well. The case of normal tying along an interface segment between fully tied limit corners has already been studied within a multiscale domain decomposition framework [30] to avoid undesirable stress concentrations at heterogeneous non-conforming interfaces. Such a splitting induces a non-linearity which would indeed have an impact within a small deformation setting, but for the large strain formulation presented in this manuscript the price of the splitting is considered low compared to the benefits of increasing the applicability of the methodology.

### 2.2 Strong and weak forms of the problem

*n*, \({\mathbf{P}}^{(s)}\) the first Piola–Kirchhoff stress tensor corresponding to the previous configuration \(\Omega _{n}^{(s)}\), and \({\hat{\mathbf{t}}^{(s)}}\) and \({\hat{\mathbf{u}}^{(s)}}\) the prescribed tractions and displacements at \(\Gamma _{\sigma }^{(s)}\) and \(\Gamma _{u}^{(s)}\), respectively.

### *Remark 2.6*

Note that the compatibility constraints are enforced to nullify the normal \({\bar{g}}_{\text {N}}(\mathbf {{u}}^{\text {D}})\) and tangential \({\bar{g}}_{\text {T}}(\mathbf {{u}}^{\text {D}})\) components of the effective gap satisfying, in this manner, displacement compatibility across domains as indicated in (28). The gaps are defined by means of the incremental displacements at the interface \(\mathbf {{u}}^{\text {D}}\) which are calculated by interpolating the displacements at the vertices according to (8). In addition, the displacements at the vertices of the interface patch are calculated with the domain displacements \(\mathbf {{u}}^{(s)}({\mathbf {{x}}}_n)\).

### *Remark 2.7*

The equilibrium equation in (23) and the imposed tractions at the boundary (26) correspond to the Euler–Lagrange equations and natural boundary conditions associated with the virtual work principle in (33). In the same spirit, the constraint equations in (28) correspond to the Euler–Lagrange equations associated with the constraint variational Eqs. (34, 35).

### 2.3 Discretization using FE and \(\lambda \)-solvability of the resulting system

*a* denotes the discrete nodes corresponding to the displacement interpolation. In a similar fashion, the displacements \({\mathbf {{u}}}^{\text {D}}\), the Lagrange multipliers \({\pmb {\lambda }}_{\text {I}}\) and their corresponding variations at the interface patch *D* are discretized as:

*b* stands for the discrete nodes corresponding to the Lagrange multiplier interpolation using the shape functions \({\Psi }\), which read:

### *Remark 2.8*

It should be noted that a piece-wise constant interpolation of the Lagrange multipliers might not lead to optimal spatial convergence rates. Indeed, in our analyses the theoretical convergence rates are not always fully recovered due to the use of piece-wise constant Lagrange multipliers, although no critical convergence behaviour has been observed in any of the simulations. A more thorough theoretical and practical study of the convergence rates with respect to the choice of the Lagrange multiplier space is beyond the scope of this work and could be considered as a future research topic for a more mathematically oriented contribution.

*L* stands for the base-side length of the interface domain element in the previous configuration (cf. Fig. 4) and \(\alpha _{\text {stab}}\) corresponds to a dimensionless user-defined parameter which is regarded as independent of the mesh size [21]. Note that the units of the stabilization parameter \(\tau \) are \([L]^3/[F]\), such that the additional variational term corresponds to an energetic contribution with units [*F*][*L*]. It should be noted that, since the penalized term is part of the Euler–Lagrange equations of the variational principle (27), it will tend to zero upon mesh refinement. For this reason the stabilization procedure described in (63, 64) qualifies as a consistent penalty method in which, unlike non-consistent penalty methods, the parameter \(\tau \) can be made significantly small without affecting the quality of the obtained results.
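The consistency claim can be illustrated on a toy saddle-point system (a generic two-DOF sketch with arbitrary data, not the discrete system of this paper): replacing the zero block by a \(-\tau \) term regularizes the multiplier equations while leaving the solution essentially unchanged for small \(\tau \):

```python
import numpy as np

# Toy stabilized saddle-point system [K  B^T; B  -tau] [u; lam] = [f; 0]:
# the -tau block plays the role of the consistent penalty term, and for
# small tau the unstabilized solution is recovered. K, f, B are arbitrary.
K = np.diag([4.0, 1.0])
f = np.array([1.0, 1.0])
B = np.array([[1.0, -1.0]])           # constraint u1 - u2 = 0

def solve(tau):
    A = np.block([[K, B.T], [B, -tau * np.eye(1)]])
    return np.linalg.solve(A, np.append(f, 0.0))

exact = solve(0.0)                    # u1 = u2 = 0.4, lam = -0.6
stab = solve(1e-7)                    # tau value used in the examples of Sect. 3
assert np.allclose(stab, exact, atol=1e-6)
assert abs(stab[0] - stab[1]) < 1e-6  # constraint still met to high accuracy
```

In contrast with a non-consistent penalty, the perturbation introduced by \(\tau \) scales with the (vanishing) penalized residual, so \(\tau \) can be kept very small without degrading the solution.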

### *Remark 2.9*

The system in (69) is non-symmetric due to the fact that the stabilization term is only introduced in the constraint equations. The consistent symmetric version proposed by Heintz and Hansbo [23] could be utilized too, and it would be recommended in those cases where the adopted FE model leads to a symmetric tangent stiffness matrix. In the context of a full parallel scheme, such a symmetric system would allow the use of efficient iterative solvers such as the preconditioned conjugate gradient [3].

### 2.4 Parallel system resolution strategies

If the flexibility matrix \({\mathbf {{F}}}_{\text {I}}\) is symmetric, the interface problem in (77) can be solved by preconditioned Conjugate Gradient iterations; otherwise a Bi-Conjugate Gradient Stabilized (Bi-CGSTAB) or a Generalized Minimal Residual (GMRES) method can be employed [3]. In our case, the non-symmetry of the flexibility problem (77) can be caused by the stabilization procedure outlined in (63, 64), which only affects the constraint variational principle, or by constitutive equations that render a non-symmetric stiffness matrix. For the case of ill-conditioned systems, e.g. domains with high stiffness contrasts due to heterogeneous components or undergoing damage growth and coalescence, robust and efficient preconditioners are generally hard to find (cf. [29, 47, 49]).
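As a minimal illustration (a small surrogate matrix, not an assembled flexibility operator), an SPD matrix perturbed by a non-symmetric term, mimicking the one-sided stabilization, can be solved with SciPy's GMRES, which is indifferent to the loss of symmetry:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Surrogate interface flexibility problem F_I * lam = d (size and data
# hypothetical). The SPD part mimics the assembled local contributions;
# the strictly upper-triangular perturbation mimics the non-symmetry
# introduced by the one-sided stabilization, which rules out plain CG.
rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
F_I = S @ S.T + 6.0 * np.eye(6)                       # SPD part
F_I += 0.01 * np.triu(rng.standard_normal((6, 6)), 1) # non-symmetric part
d = rng.standard_normal(6)

lam, info = gmres(F_I, d)   # GMRES handles the non-symmetric system
assert info == 0
assert np.linalg.norm(F_I @ lam - d) < 1e-4 * np.linalg.norm(d)
```

In a full parallel implementation the matrix would never be formed explicitly; GMRES only needs matrix-vector products, which are assembled from the local domain solves.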

The main goal of this contribution is not the parallel assessment of the proposed domain decomposition technique but rather the introduction of the novel concepts for handling non-conforming meshes and its performance in general assembly situations. However, it is highlighted that the algorithm is perfectly compatible with a full parallel scheme as the ones explained above. For clarity, in all the examples presented in Sect. 3 the flexibility problem in (77) was explicitly assembled and solved through standard direct solvers using an LU factorization.

### 2.5 A non-intrusive strategy to handle rigid body modes in the DIM

The local problems in (81) are solved using direct solvers while an iterative solver is employed for the augmented interface problem (86) which is transformed into a semi-definite system of equations on \(\Delta {\pmb {\Lambda }}\), i.e. eliminating the RBMs from the system, by imposing \({\mathbf {{G}}}_{\text {I}}^{\text {T}}\Delta {\pmb {\Lambda }}={\mathbf {{e}}}\) through a projection operator (cf. [12, 45] for a more detailed explanation).
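The projection idea can be sketched independently of the rest of the solver machinery; the following toy example (random \({\mathbf {{G}}}_{\text {I}}\) and \({\mathbf {{e}}}\), names hypothetical) builds the standard projector onto the null space of \({\mathbf {{G}}}_{\text {I}}^{\text {T}}\) and verifies that the compatibility condition holds for any projected iterate:

```python
import numpy as np

# Enforce G^T Lam = e by splitting Lam = Lam0 + P w, where Lam0 is a
# particular solution and P projects onto null(G^T).
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 3))   # one column per rigid body mode
e = rng.standard_normal(3)

GtG_inv = np.linalg.inv(G.T @ G)
Lam0 = G @ GtG_inv @ e            # particular solution of G^T Lam = e
P = np.eye(8) - G @ GtG_inv @ G.T # projector onto null(G^T)

w = rng.standard_normal(8)        # arbitrary iterate
Lam = Lam0 + P @ w
assert np.allclose(G.T @ Lam, e)  # constraint satisfied for any w
assert np.allclose(P @ P, P)      # P is a projector
```

Iterating on the projected variable `w` is what renders the semi-definite system solvable, since the RBM directions are filtered out of every search direction.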

*c* denoting the penalty coefficient utilized to enforce the new condition. The additional term (93) can be expressed in terms of the virtual displacements as

### *Remark 2.10*

Equations (113, 114) are obtained assuming linear (constant strain) interface patches as adopted in the examples presented in this work. If a higher order interpolation is employed at the interface patches, a mean value of the normal and tangential traction \(t_{\text {N}}\) and \(t_{\text {T}}\) can be used along \(\Gamma _{\text {D}}^{(p)}\) such that the relations (113, 114) can still be utilized.

### *Remark 2.11*

In all our computations we considered only interface patches \(D^{(r)}\) adjacent to the domain \(\Omega ^{(m)}\). If the base-line of the patch \(D^{(r)}\) chosen to avoid the RBMs were not located at \(\partial \Omega ^{(m)}\), contributions to adjacent domains would appear in (70) and, therefore, the system could not be properly processed in a parallel fashion.

### 2.6 Iterative scheme for the non-linear DIM

The linearized set of equations in (70) obtained with a Newton-like scheme is solved iteratively for each load/time step \(\Delta t\), as done in the so-called Newton-Krylov-Schur methods [6, 12]. In this view, a first type of iteration refers to the solution of the non-linear problem with successive linear approximations. A second type of iteration arises from the solution of the flexibility problem in (77), where usually Conjugate Gradient or GMRES iterates are considered. Finally, the Schur complements are utilized for the local solutions at each domain \(\Omega ^{(s)}\) (80).
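The outermost (Newton) iteration level can be sketched on a scalar toy residual; in the actual scheme the linearized correction is obtained through the Krylov and Schur levels rather than a scalar division, and the residual comes from the assembled domain equations:

```python
import numpy as np

# Outer Newton iterations for a non-linear residual r(u) = 0. Toy
# residual: a hardening spring r(u) = k*u + c*u**3 - f (data arbitrary).
k, c, f = 2.0, 1.0, 3.0

u = 0.0
for it in range(25):
    r = k * u + c * u**3 - f
    if abs(r) < 1e-12:
        break                    # converged load/time step
    Kt = k + 3.0 * c * u**2      # consistent tangent (Jacobian)
    u -= r / Kt                  # linearized correction (a Krylov solve in practice)

assert abs(k * u + c * u**3 - f) < 1e-12
assert abs(u - 1.0) < 1e-10      # root of 2u + u^3 = 3
```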

## 3 Framework validation through representative simulations

In the following we present a number of academic examples which highlight the accuracy and convergence properties of the framework. Attention is focused on the continuity of the solution at the interface, the convergence rate upon mesh refinement and a qualitative comparison of the advantages and eventual pitfalls against existing formulations. Infinitesimal strain theory is utilized in all examples except for the one reported in Sect. 3.5, where finite strain theory is considered. Additionally, the stabilization parameter \(\tau \) (cf. Eqs. 63–68) is set to \(10^{-7}\) in all our computations except for the results reported at the end of Sect. 3.4, where a sensitivity analysis is performed varying the values of \(\tau \).

### 3.1 Patch test

The so-called ‘patch test’ is specifically selected to verify the correct transfer of information across the interface. A compression analysis is performed on a two-dimensional homogeneous quadrilateral specimen. Plane strain conditions are assumed and a linear elastic constitutive law is considered with Young’s modulus \(E=2.1 \times 10^{2}\) MPa and Poisson’s ratio \(\nu =0.3\). The geometry, boundary conditions and domain discretizations are depicted in Fig. 7.

Displacement contours for both discretizations are shown in Fig. 8 within the corresponding deformed configurations. The contour lines show that continuity is satisfied throughout the whole specimen at this observation scale. The horizontal and vertical stress fields are constant and identical for both cases up to machine precision and are therefore not reported. The relative error \(e_\text {r}\) between the numerical and reference stresses is \(1.2\times 10^{-8}\) for the horizontal stress and \(1.1\times 10^{-8}\) for the vertical stress.

Since the maximum relative error is of order \(10^{-8}\), it is concluded that the proposed methodology passes the patch test and provides an adequate transfer of information across a non-conforming interface.

### 3.2 Patch test with floating subdomains

The penalty coefficient *c* used to overcome the appearance of RBMs (cf. 93–98) is set to \(10^{-4}\) in all examples that require the treatment of floating domains.

Both relative errors are significantly small and, therefore, it is concluded that the patch test is successfully passed in those cases where floating subdomains are present. This indicates that the proposed non-intrusive methodology to handle floating domains does not affect the accuracy of the domain decomposition framework.

### 3.3 Cantilever beam test

The reference solution for the right end deflection is obtained through simulations with a monolithic approach, i.e. considering a single discretization for the whole specimen and employing a standard FE approach. A mesh sensitivity analysis is performed (cf. Fig. 12) and the reference deflection \(u^{\text {ref}}_{\text {y}}=-3.478\times 10^{-2}\) m is selected which corresponds to a mesh discretization similar to the one chosen in the domain decomposition (DD) approach.

The vertical displacement contours for both straight and curved interfaces are shown in Fig. 13. Note that displacement continuity is fulfilled across the straight and curved interfaces.

Cantilever beam test. Accuracy of the proposed approach compared to the reference numerical solution

| | Straight interface | Curved interface |
|---|---|---|
| Number of \(\Lambda \) | 36 | 36 |
| \(\displaystyle \dfrac{u_{\text {y}}}{u^{\text {ref}}_{\text {y}}}\) | 99.00 % | 98.33 % |

Cantilever beam test. Accuracy of similar approaches compared to a reference theoretical solution

| | Coarse mortar mesh | Fine mortar mesh | Dual DD method |
|---|---|---|---|
| Number of \(\Lambda \) | 15 | 23 | 35 |
| \(\displaystyle \dfrac{u_{\text {y}}}{u^{\text {ref}}_{\text {y}}}\) | 79.06 % | 99.86 % | 99.84 % |

A similar example was carried out by Herry et al. [24], where the cantilever beam was analyzed using bilinear quadrilateral FE. Table 2 is reproduced from this study and compares the performance of mortar methods and a dual domain decomposition approach developed for non-matching meshes. It is worth noting that the accuracy of the deflection obtained in the present contribution is comparable to the accuracies of the above-mentioned approaches. Table 2 also shows that the original mortar method performs poorly when the coarse mesh is used to define the mortar surface. This is regarded as a considerable drawback of the method since, in a general case, the distinction between coarse and fine discretizations at the interface might not be straightforward. The approach based on dual domain decomposition methods does not suffer from this shortcoming, since a third surface is constructed with a particular arrangement of Lagrange multipliers that leads to an optimum matching condition regarding the kinematic continuity. However, this approach is based on the assumption that the three surfaces have the same geometry. This implies that, upon discretization, a number of nodes, e.g. extreme nodes, are common. This assumption is reasonable for cases in which the domains originate from the decomposition of a continuous body. Conversely, if the domains are glued at a common surface and discretized independently, this condition might not be realistic.

More advanced mortar methods employed nowadays [40, 41, 42, 50] consider a carefully chosen Lagrange multiplier space based on stability and optimality considerations, or even a third auxiliary surface with an optimal node collocation to minimize the error of the interface integrals. Such variants avoid the suboptimal performance shown in Table 2 but, for the case of a third auxiliary surface, extra degrees of freedom (possibly condensable) are required and, therefore, the cost and complexity of the formulation are increased. For the case of a heterogeneous interface, the accuracy of such advanced mortar methods would not be affected. However, the DIM outlined in this contribution for two-dimensional applications differs from these techniques in that no extra projection surfaces or extra DOFs are required for a general geometrically incompatible interface, since this is automatically taken into account by the interface mesh. It is not the author's intention to provide a review of all recent mortar variants, but rather to compare the accuracy of the DIM with situations in which original mortar methods perform optimally. We believe that more advanced mortar technologies would provide an accuracy comparable to that observed for original mortar techniques when the finest mesh is selected as the mortar surface. In such scenarios, the DIM would be a computationally cheaper alternative.

### 3.4 Convergence analysis and dependence on the stabilization parameter \(\tau \)

The beam is split into ten domains as indicated in Fig. 14 and an alternate regular discretization is considered between domains such that all interfaces are non-conforming. Four levels of refinement are employed, corresponding to element sizes ranging from 1/4 to 1/32 m for one set of domains and 1/6 to 1/48 m for the alternate set of domains.

The convergence rate \(m\) is about 1.27 between the first two discretizations (cf. Fig. 15). This is in accordance with the results presented by Girault et al. [18], where a different error measure is computed which takes into account the prescribed boundary and the Lagrange multipliers at the interface.
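An observed convergence rate between two refinement levels is obtained from the usual assumption \(e \approx C\,h^{m}\), i.e. \(m = \log(e_1/e_2)/\log(h_1/h_2)\). A minimal sketch of this computation follows; the error values are hypothetical and merely chosen to be consistent with a rate of about 1.27, as reported above.

```python
import math

def convergence_rate(h_coarse, h_fine, e_coarse, e_fine):
    """Observed rate m assuming e ~ C * h**m between two refinement levels."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical errors for the first two refinement levels
# (element sizes 1/4 and 1/8 m), constructed so that m = 1.27.
h1, h2 = 1 / 4, 1 / 8
e1 = 1.0e-2
e2 = e1 * (h2 / h1) ** 1.27

m = convergence_rate(h1, h2, e1, e2)
```

In practice, `e1` and `e2` would be the discretization errors measured in a suitable norm at consecutive refinement levels.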

**Table 3** Deflections at point P using the monolithic and domain decomposition approaches

| | Monolithic \(\left| \left| {\mathbf {{u}}}_{\text {M}}\right| \right| \) [m] | Domain decomposition \(\left| \left| {\mathbf {{u}}}_{\text {DDM}}\right| \right| \) [m] | Relative error \(e_{\text {r}}=\dfrac{\left| \left| {\mathbf {{u}}}_{\text {M}}\right| \right| -\left| \left| {\mathbf {{u}}}_{\text {DDM}}\right| \right| }{\left| \left| {\mathbf {{u}}}_{\text {DDM}}\right| \right| }\) |
| --- | --- | --- | --- |
| Point P | 1.195 | 1.251 | \(4.676\,\%\) |
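The relative error definition used in the table can be sketched as follows; the deflection norms below are hypothetical placeholders, not the values reported for point P.

```python
def relative_error(u_M, u_DDM):
    """Relative error e_r = (||u_M|| - ||u_DDM||) / ||u_DDM||, as in Table 3."""
    return (u_M - u_DDM) / u_DDM

# Hypothetical deflection norms [m]; the sign of e_r indicates whether the
# monolithic solution over- or under-predicts the DD deflection.
e_r = relative_error(1.20, 1.25)
e_r_percent = abs(e_r) * 100
```

A signed definition such as this one retains the information of which approach yields the larger deflection, while the magnitude is what is typically quoted.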

### 3.5 Geometrically incompatible non-matching meshes

In order to assess the performance of the DIM in this example, the vertical displacement of the wing end (point P) is monitored and compared to the one obtained with a reference monolithic approach considering a similar spatial discretization. As can be observed in the contour plots in Fig. 20 and the deflections reported in Table 3, the results for the geometrically incompatible non-conforming interface are remarkably close to those obtained with the monolithic approach used as the reference solution for this problem. Note that the interface gap depicted in the close-up in Fig. 20 is hardly visible; it can be concluded that the methodology shows satisfactory results and performs robustly even in the most demanding cases.

## 4 Conclusions and future perspectives

The kernel of the proposed DIM presented in this contribution resides in an explicit discretization of the interface by means of a zero-thickness Delaunay triangulation. This is accomplished through a fictitious contraction of the subdomains at the interface, which allows for a proper discretization between the shrunk domain boundaries. The fictitious contraction has no impact on the solution of the problem since all calculations are performed using the original coordinates. Moreover, it is shown that the integrals over the zero-thickness interface are bounded despite the fact that the integrand is not. The method is grounded in the so-called Nitsche methods, in which a stabilization term is added to the constraint equations. In this manner, zero diagonal terms are not present in the global system and instabilities are avoided even if the LBB condition is not fulfilled by the chosen discretization. This process can be viewed as a consistent penalty method, since the stabilization term vanishes with progressive mesh refinement and the penalty factor can be made very small without affecting the results.
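A generic stabilized Lagrange-multiplier functional of this type, in the spirit of Nitsche-based stabilizations such as [23, 37], can be written as follows. The notation here is illustrative and not taken from the paper's formulation:

```latex
\Pi(\mathbf{u},\boldsymbol{\lambda})
  \;=\; \sum_{s}\Pi_{s}(\mathbf{u}_{s})
  \;+\; \int_{\Gamma}\boldsymbol{\lambda}\cdot[\![\mathbf{u}]\!]\,\mathrm{d}\Gamma
  \;-\; \frac{\tau}{2}\int_{\Gamma}
        \bigl\|\boldsymbol{\lambda}-\mathbf{t}(\mathbf{u})\bigr\|^{2}\,\mathrm{d}\Gamma ,
```

where \([\![\mathbf{u}]\!]\) denotes the displacement jump across the interface \(\Gamma\), \(\mathbf{t}(\mathbf{u})\) the traction computed from the bulk stress, and \(\tau\) the stabilization parameter. The last term vanishes as the discrete traction converges to \(\boldsymbol{\lambda}\) upon mesh refinement, which is why the scheme behaves as a consistent penalty method.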

The methodology is inherited from the field of contact mechanics and is therefore regarded as more general than existing domain decomposition strategies, since there is no need for a fixed interface geometry shared by the decomposed domains on both sides of the interface. This is, for instance, the case of some dual domain decomposition techniques in which the limit DOFs at the interface need to be common to both adjacent domain discretizations. Moreover, the generation of interface patches is independent of the choice of slave and master sides, in contrast with early mortar methods, and the methodology is therefore regarded as less prone to errors related to this choice. More evolved mortar methodologies are able to handle these situations automatically by considering, for instance, an extra interface surface from which a particular distribution of DOFs serves to construct an optimal set of interface constraints between adjacent meshes. However, such an intermediate surface involves calculations over extra DOFs, which can increase the computational cost and complexity of the approach.

A new non-intrusive strategy to handle floating domains is outlined which adds an extra stabilization term to the energy functional with contributions from all adjacent interface patches. This avoids the calculation of a pseudo-inverse at floating domains and does not destroy the band structure of the global system. It therefore does not hinder a possible parallel solution, in contrast with existing dual domain decomposition methods.

The DIM passes the patch test also in the case of floating subdomains, providing continuity of the stress field across the interface, which indicates that the new ingredients do not affect the accuracy of the solution when compared to other established techniques. Remarkable continuity of the displacement field across the interface is observed in all reported experiments, and a comparable degree of accuracy is obtained independently of the shape of the interfaces. In addition, good convergence rates are reached upon mesh refinement, similar to other accurate techniques for non-conforming interfaces, although the theoretical convergence rates could not be attained exactly due to the piecewise constant interpolation of the Lagrange multipliers.

The algorithmic treatment of the subdomains allows for a parallel solution scheme analogous to well-established techniques such as dual domain decomposition methods. In this manner, the local factorizations can be tackled by direct solvers while the resulting interface problem can be assembled in a matrix-free fashion and solved with iterative solvers. A fully parallel version of the framework involves the construction of adequate preconditioners for the interface problem, which was out of the scope of this contribution and is left as a future research line. In the same spirit, a 3D extension of the method is left as a topic for further research. Preliminary 3D works have been successfully performed for the contact domain case [22] and the technology is presumed to show sufficient potential to perform well in large 3D cases, although challenges are expected concerning a robust 3D Delaunay tetrahedralization for the most complex interface geometries. In any case, the domain decomposition formulation applied to monoscale analysis or 'static' spatial discretizations has the advantage of performing the interface meshing once at the beginning of the analysis; the cost of the Delaunay tetrahedralization (certainly more involved in 3D) is therefore negligible compared to the cost of the whole analysis.

The framework presented in this contribution provides the basis to study complex deformation phenomena involving large strains, e.g. bulk metal forming, and shows clear potential to tackle multifield applications, e.g. mixed formulations for incompressible and thermo-mechanical problems. In this view, the field of Lagrange multipliers needs to be extended to account for the temperature and pressure fields; this extension is planned for a future contribution.

The explicit Delaunay triangulation of the interface is expected to positively impact the field of multiresolution problems and adaptive multiresolution analysis. Since arbitrary discretizations can be handled at non-conforming interfaces, adaptive multiresolution analyses can be performed employing an independent 'on-the-fly' refinement at particular domains of interest without meshing restrictions. In the same spirit, existing independent discretizations of particular domains can be easily reused and incorporated into the calculations.

Due to its versatility and generality, the DIM is viewed as an attractive alternative to mortar methods and other established dual domain decomposition methods in which the interface geometry and its limits are restricted to the boundary discretization of the connected domains.

## Notes

### Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 320815 (ERC Advanced Grant Project “Advanced tools for computational design of engineering materials” COMP-DES-MAT). Oriol Lloberas-Valls gratefully acknowledges the funding received from the Spanish Ministry of Economy and Competitiveness through the “Juan de la Cierva” Postdoctoral Junior Grant: JCI-2012-13782 and the National Research Plan 2014: MAT2014-60919-R.

## References

- 1. Babuška I (1973) The finite element method with Lagrangian multipliers. Numer Math 20(3):179–192
- 2. Babuška I, Narasimhan R (1997) The Babuška–Brezzi condition and the patch test: an example. Comput Methods Appl Mech Eng 140(1–2):183–199
- 3. Barrett R, Berry M, Chan TF, Demmel J, Donato JM, Dongarra J, Eijkhout V, Pozo R, Romine C, Van Der Vorst H (1993) Templates for the solution of linear systems: building blocks for iterative methods. SIAM Press, Philadelphia, PA
- 4. Becker R, Hansbo P, Stenberg R (2003) A finite element method for domain decomposition with non-matching grids. ESAIM Math Model Numer Anal 37:209–225
- 5. Bernardi C, Maday Y, Patera A (1994) A new nonconforming approach to domain decomposition: the mortar element method. In: Brezis H, Lions JL (eds) Nonlinear partial differential equations and their application, volume XI of College de France Seminar. Pitman, London, pp 13–51
- 6. Cresta P, Allix O, Rey C, Guinard S (2007) Nonlinear localization strategies for domain decomposition methods: application to post-buckling analyses. Comput Methods Appl Mech Eng 196(8):1436–1446
- 7. de Boer A, van Zuijlen AH, Bijl H (2007) Review of coupling methods for non-matching meshes. Comput Methods Appl Mech Eng 196(8):1515–1525
- 8. Duarte CA, Kim DJ (2008) Analysis and applications of a generalized finite element method with global-local enrichment functions. Comput Methods Appl Mech Eng 197(6–8):487–504
- 9. Everdij FPX, Lloberas-Valls O, Rixen DJ, Simone A, Sluys LJ (2013) Domain decomposition and parallel direct solvers as an adaptive multiscale strategy for damage simulation in materials. In: Proceedings of the 22nd international conference on domain decomposition methods (DD22), Lugano, Switzerland
- 10. Farhat C (1990) Which parallel finite element algorithm for which architecture and which problem? Eng Comput 7(3):186–195
- 11. Farhat C, Roux FX (1991) A method of finite element tearing and interconnecting and its parallel solution algorithm. Int J Numer Methods Eng 32(6):1205–1227
- 12. Farhat C, Pierson K, Lesoinne M (2000) The second generation FETI methods and their application to the parallel solution of large-scale linear and geometrically non-linear structural analysis problems. Comput Methods Appl Mech Eng 184(2–4):333–374
- 13. Feyel F, Chaboche JL (2000) \({\rm FE}^{2}\) multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials. Comput Methods Appl Mech Eng 183(3–4):309–330
- 14. Flemisch B, Kaltenbacher M, Triebenbacher S, Wohlmuth B (2011) Non-matching grids for a flexible discretization in computational acoustics. Commun Comput Phys 11(2):472–488
- 15. Gendre L, Allix O, Gosselet P (2011) A two-scale approximation of the Schur complement and its use for non-intrusive coupling. Int J Numer Methods Eng. doi: 10.1002/nme.3142
- 16. Géradin M, Rixen DJ (2015) Mechanical vibrations: theory and applications, 3rd edn. Wiley. ISBN 978-1-118-90020-8
- 17. Ghosh S, Bai J, Raghavan P (2007) Concurrent multi-level model for damage evolution in microstructurally debonding composites. Mech Mater 39(3):241–266
- 18. Girault V, Pencheva GV, Wheeler MF, Wildey TM (2009) Domain decomposition for linear elasticity with DG jumps and mortars. Comput Methods Appl Mech Eng 198(21–26):1751–1765
- 19. Gosselet P, Rey C (2006) Non-overlapping domain decomposition methods in structural mechanics. Arch Comput Methods Eng 13(4):515–572
- 20. Guidault PA, Allix O, Champaney L, Navarro JP (2007) A two-scale approach with homogenization for the computation of cracked structures. Comput Struct 85(17–18):1360–1371
- 21. Hartmann S, Oliver J, Weyler R, Cante JC, Hernández JA (2009) A contact domain method for large deformation frictional contact problems. Part 2: numerical aspects. Comput Methods Appl Mech Eng 198(33–36):2607–2631
- 22. Hartmann S, Weyler R, Oliver J, Cante JC, Hernández JA (2010) A 3D frictionless contact domain method for large deformation problems. Comput Model Eng Sci 55(3):211–269
- 23. Heintz P, Hansbo P (2006) Stabilized Lagrange multiplier methods for bilateral elastic contact with friction. Comput Methods Appl Mech Eng 195(33–36):4323–4333
- 24. Herry B, Di Valentin L, Combescure A (2002) An approach to the connection between subdomains with non-matching meshes for transient mechanical analysis. Int J Numer Methods Eng 55(8):973–1003
- 25. Hughes TJR, Feijóo GR, Mazzei L, Quincy JB (1998) The variational multiscale method—a paradigm for computational mechanics. Comput Methods Appl Mech Eng 166(1–2):3–24
- 26. Hund A, Ramm E (2007) Locality constraints within multiscale model for non-linear material behaviour. Int J Numer Methods Eng 70(13):1613–1632
- 27. Kouznetsova V, Brekelmans WAM, Baaijens FPT (2001) An approach to micro-macro modeling of heterogeneous materials. Comput Mech 27(1):37–48
- 28. Ladevèze P, Loiseau O, Dureisseix D (2001) A micro-macro and parallel computational strategy for highly heterogeneous structures. Int J Numer Methods Eng 52(1–2):121–138
- 29. Lloberas-Valls O (2013) Multiscale domain decomposition analysis of quasi-brittle materials. PhD thesis, Delft University of Technology
- 30. Lloberas-Valls O, Rixen DJ, Simone A, Sluys LJ (2012) On micro-to-macro connections in domain decomposition multiscale methods. Comput Methods Appl Mech Eng 225–228:177–196
- 31. Lloberas-Valls O, Rixen DJ, Simone A, Sluys LJ (2012) Multiscale domain decomposition analysis of quasi-brittle heterogeneous materials. Int J Numer Methods Eng 89(11):1337–1366
- 32. Lloberas-Valls O, Everdij FPX, Rixen DJ, Simone A, Sluys LJ (2014) Multiscale analysis of damage using dual and primal domain decomposition techniques. In: Oñate E, Oliver X, Huerta A (eds) Proceedings of the 11th world congress on computational mechanics—WCCM XI, Barcelona, Spain
- 33. Loehnert S, Prange C, Wriggers P (2012) Error controlled adaptive multiscale XFEM simulation of cracks. Int J Fract 178(1):147–156
- 34. MacPherson I, Rodgers J, Allen C, Fenwick C (2006) Sliding and non-matching grid methods for helicopter simulations. In: 44th AIAA aerospace sciences meeting and exhibit. American Institute of Aeronautics and Astronautics
- 35. McGee W, Seshaiyer P (2005) Non-conforming finite element methods for nonmatching grids in three dimensions. In: Domain decomposition methods in science and engineering, number 40 in lecture notes in computational science and engineering. Springer, Berlin, pp 327–334
- 36. Mergheim J (2009) A variational multiscale method to model crack propagation at finite strains. Int J Numer Methods Eng 80(3):269–289
- 37. Nitsche J (1971) Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 36(1):9–15
- 38. Oliver J, Hartmann S, Cante JC, Weyler R, Hernández JA (2009) A contact domain method for large deformation frictional contact problems. Part 1: theoretical basis. Comput Methods Appl Mech Eng 198(33–36):2591–2606
- 39. Park KC, Felippa CA, Rebel G (2002) A simple algorithm for localized construction of non-matching structural interfaces. Int J Numer Methods Eng 53(9):2117–2142
- 40. Popp A, Wohlmuth BI, Gee MW, Wall WA (2012) Dual quadratic mortar finite element methods for 3D finite deformation contact. SIAM J Sci Comput 34(4):B421–B446
- 41. Popp A, Gee MW, Wall WA (2013) Mortar methods for single- and multi-field applications in computational mechanics. In: Resch M, Wang X, Bez W, Focht E, Kobayashi H (eds) Sustained simulation performance 2012. Springer, Berlin, pp 133–154
- 42. Puso MA (2004) A 3D mortar method for solid mechanics. Int J Numer Methods Eng 59(3):315–336
- 43. Puso MA, Laursen TA (2004) A mortar segment-to-segment contact method for large deformation solid mechanics. Comput Methods Appl Mech Eng 193(6–8):601–629
- 44. Quiroz L, Beckers P (1995) Non-conforming mesh gluing in the finite elements method. Int J Numer Methods Eng 38(13):2165–2184
- 45. Rixen DJ (1997) Substructuring and dual methods in structural analysis. PhD thesis, Publications de la Faculté des Sciences Appliquées, no. 175, Université de Liège, Belgium
- 46. Rixen DJ (2001) Parallel processing. In: Encyclopedia of vibration. Elsevier, Oxford, pp 990–1001
- 47. Rixen DJ, Farhat C (1999) A simple and efficient extension of a class of substructure based preconditioners to heterogeneous structural mechanics problems. Int J Numer Methods Eng 44(4):489–516
- 48. Simo JC, Hughes TJ (1998) Computational inelasticity, volume 7 of interdisciplinary applied mathematics. Springer, New York
- 49. Spillane N, Rixen DJ (2013) Automatic spectral coarse spaces for robust FETI and BDD algorithms. Int J Numer Methods Eng 95(11):953–990
- 50. Wohlmuth BI (2000) A mortar finite element method using dual spaces for the Lagrange multiplier. SIAM J Numer Anal 38(3):989–1012
- 51. Wriggers P (1995) Finite element algorithms for contact problems. Arch Comput Methods Eng 2(4):1–49

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.