1 Introduction

Considerable interest has recently emerged in the enforcement of essential boundary conditions [1] and contact detection algorithms [2] using approximate distance functions (ADFs). Realistic contact frameworks for finite-strain problems make use of three classes of algorithms that are often coupled (see [3]):

  1.

    Techniques of unilateral constraint enforcement. Examples of these are direct elimination (also reduced-gradient algorithms), penalty and barrier methods, and Lagrange multiplier methods with the complementarity condition [4]. Stresses from the underlying discretization are often used to enforce the normal condition in Nitsche’s method [5].

  2.

    Frictional effects (and more generally, constitutive contact laws) and cone-complementarity forms [6, 7]. Solution paradigms are augmented Lagrangian methods for friction [8], cone-projection techniques [9] and the so-called yield-limited algorithm for friction [10].

  3.

    Discretization methods. Of concern are distance calculations, estimation of tangent velocities and general discretization arrangements. In contemporary use are (see [3]) node-to-edge/face, edge/face-to-edge/face and mortar discretizations.

In terms of contact enforcement and friction algorithms, finite displacement contact problems are typically addressed with well-established contact algorithms, often derived from solutions developed by groups working on variational inequalities and nonsmooth analysis. However, in the area of contact discretization and the related area of contact kinematics [11], there are still challenges to be addressed in terms of ensuring the complete robustness of implicit codes. One of the earliest papers on contact discretization was by Chan and Tuba [12], who considered contact with a plane and used symmetry to analyze cylinder-to-cylinder contact. Francavilla and Zienkiewicz [13] later proposed an extension to node-to-node contact in small strains, allowing for direct comparison with closed-form solutions. The extension to finite strains required further development, and the logical development was the node-to-segment approach, as described in the work of Hallquist [14]. Although node-to-segment algorithms are known to entail many deficiencies, most of the drawbacks have been addressed. Jumps and discontinuities resulting from nodes sliding between edges can be removed by smoothing and regularization [15]. Satisfaction of the patch test, which is fundamental for convergence, can be enforced by careful penalty weighting [16, 17]. It is well known that single-pass versions of the node-to-segment method result in interference, and double-pass versions can produce mesh interlocking, see [18, 19]. This shortcoming has eluded attempts at a solution and has motivated the development of surface-to-surface algorithms. One of the first surface-to-surface algorithms was introduced by Simo, Wriggers, and Taylor [11]. Zavarise and Wriggers [20] presented a complete treatment of the 2D case, and further developments were supported by parallel work in nonconforming meshes, see [21].
Stabilization of mortar methods for contact problems is discussed in the review article [22], and an exhaustive detailing of most geometric possibilities of contact was presented by Farah, Wall and Popp [23]. That paper revealed the significant effort that is required to obtain a robust contact algorithm. An interesting alternative approach to contact discretization has been proposed by Wriggers, Schröder, and Schwarz [24], who use an intermediate mesh with a specialized anisotropic hyperelastic law to represent contact interactions. In the context of large, explicit codes, Kane et al. [25] introduced an interference function, or gap, based on overlapping volumes, allowing non-smooth contact to be dealt with in traditional smooth-based algorithms.

In addition to these general developments, there have been specialized algorithms for rods, beams, and other structures. Litewka et al. [26], as well as Neto et al. [27, 28], have produced efficient algorithms for beam contact. For large-scale analysis of beams, cables and ropes, Meier et al. [29] have shown how beam specialization can be very efficient when a large number of entities is involved.

Recently, interest has emerged in using approximate distance functions [2, 30,31,32] as alternatives to algorithms that rely heavily on computational geometry. These algorithms resolve the non-uniqueness of the projection, but still require geometric computations. In [30], Wolff and Bucher proposed a local construction to obtain distances inside any object, which still requires geometric calculations, especially for the integration scheme along surfaces. Liu et al. [31] have combined finite elements with the distance potential discrete element method (DEM) in 2D within an explicit integration framework; a geometric-based distance function is constructed and contact forces stem from this construction. In [32], the analysis of thin rods is performed using classical contact, but closed-form contact detection is achieved by a signed-distance function defined on a voxel-type grid. In [2], a pre-established set of shapes is considered, and a function is defined for each particle in a DEM setting, with a projection that follows. In the context of computer graphics and computational geometry, Macklin et al. [33] introduced an element-wise local optimization algorithm to determine the closest distance between the signed-distance-function isosurface and face elements. Although close to what is proposed here, no solution of a partial differential equation (PDE) is involved and significant geometric calculations are still required.

In this paper, we adopt a different method, which aims to be more general and less geometry-based. This is possible due to the equivalence between the solution of the Eikonal equation and the distance function [34]. It is worth noting that very efficient linear algorithms are available to solve regularized Eikonal equations, as discussed by Crane et al. [35]. The work in [35] provides a viable solution for contact detection in computational mechanics. Applications extend beyond contact mechanics: regularized distance solvers also address the long-standing issue of imposing essential boundary conditions in meshfree methods [1].

We solve a partial differential equation (PDE) that produces an ADF (approximate distance function) that converges to the distance function as a length parameter tends to zero. The relation between the screened Poisson equation (also identified as a Helmholtz-like equation), which is adopted in computational damage and fracture [36, 37], and the Eikonal equation is well understood [38]. Regularizations of the Eikonal equation, such as the geodesics-in-heat approach [35] and the screened Poisson equation, are discussed by Belyaev and Fayolle [39]. We take advantage of the latter for computational contact mechanics. Specifically, the proposed algorithm solves well-known shortcomings of geometric-based contact enforcement:

  1.

    Geometric calculations are reduced to the detection of a target element for each incident node.

  2.

    The gap function \(g(\varvec{x})\) is continuous, since the solution of the screened Poisson equation is a continuous function.

  3.

    The contact force direction is unique and obtained as the gradient of \(g(\varvec{x})\).

  4.

    Since the Courant–Beltrami penalty function is adopted, contact force is continuous in the normal direction.

Concerning the importance of solving the uniqueness problem, Konyukhov and Schweizerhof [40] have shown that considerable computational geometry must be in place to ensure uniqueness and existence of the node-to-line projection. Another important computational effort was presented by Farah et al. [23] to geometrically address all cases in 3D with mortar interpolation. Extensions to higher dimensions require even more intricate housekeeping. When compared with the geometrical approach to the distance function [2, 32], the present algorithm is much simpler, at the cost of the solution of an additional PDE. Distance functions can also be generated by level-set solutions of the transport equation [41].

The remainder of this paper is organized as follows. In Sect. 2, the approximate distance function is introduced as the solution of a regularization of the Eikonal equation. In Sect. 3, details concerning the discretization are introduced. The overall algorithm, including the important step-control, is presented in Sect. 4. Verification and validation examples are shown in Sect. 5 in both 2D and 3D. Finally, in Sect. 6, we present the advantages and shortcomings of the present algorithm, as well as suggestions for further improvements.

2 Approximate distance function (ADF)

Let \(\Omega \subset {\mathbb {R}}^{d}\) be a deformed configuration of a given body in d-dimensional space and \(\Omega _{0}\) the corresponding undeformed configuration. The boundaries of these configurations are \(\Gamma \) and \(\Gamma _{0}\), respectively. Let us consider deformed coordinates of an arbitrary point \({\varvec{x}\in \Omega }\) and a specific point, called incident, with coordinates \(\varvec{x}_{I}\). For reasons that will become clear, we also consider a potential function \(\phi \left( \varvec{x}_{I}\right) \), which is the solution of a scalar PDE. We are concerned with an approximation of the signed-distance function. The so-called gap function is now introduced as a differentiable function \(g\left[ \phi \left( \varvec{x}_{I}\right) \right] \) such that [3]:

$$\begin{aligned} g\left[ \phi \left( \varvec{x}_{I}\right) \right] {\left\{ \begin{array}{ll} <0 &{} \quad \varvec{x}_{I}\in \Omega \\ =0 &{} \quad \varvec{x}_{I}\in \Gamma \\ >0 &{} \quad \varvec{x}_{I}\notin \Omega \cup \Gamma \end{array}\right. }. \end{aligned}$$
(1)

If a unique normal \(\varvec{n}\left( \varvec{x}_{I}\right) \) exists for \(\varvec{x}_{I}\in \Gamma \), we can decompose the gradient of \(g\left[ \phi \left( \varvec{x}_{I}\right) \right] \) into parallel (\(\parallel \)) and orthogonal (\(\perp \)) terms: \(\nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] =\left\{ \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} _{\parallel }+\left\{ \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} _{\perp }\), with \(\left\{ \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} _{\perp }=\left\{ \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \cdot \varvec{n}\left( \varvec{x}_{I}\right) \right\} \varvec{n}\left( \varvec{x}_{I}\right) \). Since \(g\left[ \phi \left( \varvec{x}_{I}\right) \right] =0\) for \(\varvec{x}_{I}\in \Gamma \), we have \(\left\{ \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} _{\parallel }=\varvec{0}\) on those points. In the frictionless case, the normal contact force component is identified as \(f_{c}\) and contact conditions correspond to the following complementarity conditions [4]:

$$\begin{aligned} g\left[ \phi \left( \varvec{x}_{I}\right) \right] f_{c}&=0 , \nonumber \\ f_{c}&\ge 0 , \nonumber \\ g\left[ \phi \left( \varvec{x}_{I}\right) \right]&\ge 0 , \end{aligned}$$
(2)

or equivalently, \(0\le f_{c}\perp g\left[ \phi \left( \varvec{x}_{I}\right) \right] \ge 0\). The vector form of the contact force is given by \(\varvec{f}_{c}=f_{c}\nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \). Assuming that \(g\left[ \phi \left( \varvec{x}_{I}\right) \right] \) and its first and second derivatives are available from the solution of a PDE, we approximately satisfy conditions (2) by defining the contact force as follows:

$$\begin{aligned} \varvec{f}_{c}= & {} f_{c}\nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \nonumber \\ {}= & {} \kappa \min \left\{ 0,g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} ^{2}\nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \quad (\kappa >0), \end{aligned}$$
(3)

where \(\kappa \) is a penalty parameter for the Courant–Beltrami function [42, 43]. The Jacobian of this force is given by:

$$\begin{aligned} \varvec{J}= & {} \kappa \min \left\{ 0,g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} ^{2}\nabla \otimes \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \nonumber \\{} & {} +2\kappa \min \left\{ 0,g\left[ \phi \left( \varvec{x}_{I}\right) \right] \right\} \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] \otimes \nabla g\left[ \phi \left( \varvec{x}_{I}\right) \right] .\nonumber \\ \end{aligned}$$
(4)
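As an illustration, the penalty force (3) and its Jacobian (4) can be evaluated at an incident node with a few lines of code. This is a minimal Python sketch, not the paper's implementation: the gap value, gradient, Hessian and penalty value below are illustrative inputs.

```python
import numpy as np

def contact_force(g, grad_g, kappa):
    """Courant-Beltrami penalty force of Eq. (3): kappa * min(0, g)^2 * grad g."""
    return kappa * min(0.0, g) ** 2 * grad_g

def contact_jacobian(g, grad_g, hess_g, kappa):
    """Jacobian of the contact force, Eq. (4)."""
    m = min(0.0, g)
    return kappa * m ** 2 * hess_g + 2.0 * kappa * m * np.outer(grad_g, grad_g)

# Grazing contact: force and Jacobian vanish continuously as g -> 0,
# which is the continuity property provided by the squared penalty term.
grad = np.array([0.0, 1.0])        # illustrative gradient of g
hess = np.zeros((2, 2))            # illustrative Hessian of g
f_out = contact_force(+1e-8, grad, kappa=1e6)  # separated node: zero force
f_in = contact_force(-1e-8, grad, kappa=1e6)   # tiny penetration: tiny force
```

Because the penetration enters squared, a vanishingly small interference produces a vanishingly small force, in contrast to a linear penalty.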

Varadhan [44] established that the solution of the screened Poisson equation:

$$\begin{aligned} c_{L} \nabla ^2 \phi (\varvec{x}) - \phi (\varvec{x})&= 0\quad \text {in } \Omega \end{aligned}$$
(5a)
$$\begin{aligned} \phi (\varvec{x})&=1 \quad \text {on } \Gamma \end{aligned}$$
(5b)

produces an ADF given by \(-c_{L}\log \left[ \phi \left( \varvec{x}\right) \right] \). This property has been recently studied by Guler et al. [38] and Belyaev et al. [34, 39]. The exact distance is obtained as a limit:

$$\begin{aligned} d\left( \varvec{x}\right) =-\lim _{c_{L}\rightarrow 0}c_{L}\log \left[ \phi \left( \varvec{x}\right) \right] . \end{aligned}$$
(6)

which is the solution of the Eikonal equation. A proof of this limit is provided in Varadhan [44]. We transform the ADF into a signed ADF by introducing a sign for outer (\(+\)) or inner (−) points:

$$\begin{aligned} g\left( \varvec{x}\right) =\pm c_{L}\log \left[ \phi \left( \varvec{x}\right) \right] . \end{aligned}$$
(7)

Note that if the plus sign is adopted in (7), then inner points will result in a negative gap \(g(\varvec{x})\). The gradient of \(g(\varvec{x})\) results in

$$\begin{aligned} \nabla g\left( \varvec{x}\right) =\pm \frac{c_{L}}{\phi \left( \varvec{x}\right) }\nabla \phi \left( \varvec{x}\right) \end{aligned}$$
(8)

and the Hessian of \(g(\varvec{x})\) is obtained in a straightforward manner:

$$\begin{aligned}{} & {} \nabla \otimes \nabla \left[ g\left( \varvec{x}\right) \right] \nonumber \\ {}{} & {} \quad =\pm \frac{c_{L}}{\phi \left( \varvec{x}\right) }\left\{ \nabla \otimes \nabla \left[ \phi \left( \varvec{x}\right) \right] -\frac{1}{\phi \left( \varvec{x}\right) } \nabla \phi \left( \varvec{x}\right) \otimes \nabla \phi \left( \varvec{x}\right) \right\} . \nonumber \\ \end{aligned}$$
(9)

Note that \(c_L\) is the square of a characteristic length, i.e. \(c_L=l_c^2\), which is here taken as a solution parameter.
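The construction (5)–(7) can be illustrated with a one-dimensional finite-difference sketch of the screened Poisson problem; the domain, mesh and value of \(l_c\) below are illustrative choices, not values from the paper.

```python
import numpy as np

# 1D screened Poisson problem (5): c_L*phi'' - phi = 0 on (0,1), phi = 1 on
# the boundary, discretized with central finite differences.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
l_c = 0.1
c_L = l_c ** 2                      # c_L is the square of the length l_c

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0           # essential boundary condition phi = 1
b[0] = b[-1] = 1.0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = c_L / h ** 2
    A[i, i] = -2.0 * c_L / h ** 2 - 1.0

phi = np.linalg.solve(A, b)
g = c_L * np.log(phi)               # plus sign in Eq. (7): negative inside
```

The solution equals 1 on the boundary and decays toward the interior, so the signed ADF \(g\) is zero on \(\Gamma \) and negative inside, as stated after (7). In 1D this problem also has the closed-form solution \(\phi (x)=\cosh [(x-1/2)/l_{c}]/\cosh [1/(2l_{c})]\), against which the discrete solution can be checked.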

Using a test field \(\delta \phi (\varvec{x})\), the weak form of (5) is written as

$$\begin{aligned}{} & {} \int _{\Omega }c_{L}\nabla \phi \left( \varvec{x}\right) \cdot \nabla \delta \phi \left( \varvec{x}\right) \textrm{d}V+\int _{\Omega }\phi \left( \varvec{x}\right) \delta \phi \left( \varvec{x}\right) \textrm{d}V\nonumber \\{} & {} =\int _{\Gamma }c_{L}\delta \phi \,\nabla \phi \left( \varvec{x}\right) \cdot \varvec{n}\left( \varvec{x}\right) \textrm{d}A, \end{aligned}$$
(10)

where \(\text {d}A\) and \(\text {d}V\) are differential area and volume elements, respectively. Since an essential boundary condition is imposed on \(\Gamma \) such that \(\phi \left( \varvec{x}\right) =1\) for \(\varvec{x}\in \Gamma \), it follows that \(\delta \phi (\varvec{x}) = 0\) on \(\Gamma \) and the right-hand side of (10) vanishes.

3 Discretization

In an interference condition, each interfering (incident) node, with coordinates \(\varvec{x}_{I}\), will fall within a given continuum element. The parent-domain coordinates \({\varvec{\xi }}\) of the incident node depend on the element nodal coordinates. Parent-domain coordinates are given by:

$$\begin{aligned} {\varvec{\xi }}_{I}&= \underset{\varvec{\xi }}{\text {argmin}}\left[ \left\| \varvec{x}\left( \varvec{\xi }\right) -\varvec{x}_{I}\right\| \right] , \end{aligned}$$
(11a)

and it is straightforward to show that for a triangle,

$$\begin{aligned} \xi _{I1}&= \frac{x_{I2}\left( x_{11}-x_{31}\right) +x_{12}x_{31}-x_{11}x_{32}+x_{I1}\left( x_{32}-x_{12}\right) }{x_{11}\left( x_{22}-x_{32}\right) +x_{12}\left( x_{31}-x_{21}\right) +x_{21}x_{32}-x_{22}x_{31}}, \end{aligned}$$
(11b)
$$\begin{aligned} \xi _{I2}&=\frac{x_{I2}\left( x_{21}-x_{11}\right) +x_{12}x_{21}-x_{11}x_{22} +x_{I1}\left( x_{12}-x_{22}\right) }{x_{11}\left( x_{22}-x_{32}\right) +x_{12}\left( x_{31}-x_{21}\right) +x_{21}x_{32}-x_{22}x_{31}}, \end{aligned}$$
(11c)

with similar expressions for a tetrahedron [45]. The continuum element interpolation is as follows:

$$\begin{aligned} \varvec{x}\left( \varvec{\xi }\right) =\sum _{K=1}^{d+1}N_{K}\left( \varvec{\xi }\right) \varvec{x}_{K}=\varvec{N}\left( \varvec{\xi }\right) \cdot \varvec{x}_{N}, \end{aligned}$$
(12)

where \(N_{K}\left( \varvec{\xi }\right) \) with \(K=1,\ldots ,d+1\) are the shape functions of a triangular or tetrahedral element. Therefore, (11a) can be written as:

$$\begin{aligned} \varvec{\xi }_{I} = \underset{\varvec{\xi }}{\text {argmin}}\left[ \left\| \varvec{N}\left( \varvec{\xi }\right) \cdot \varvec{x}_{N}-\varvec{x}_{I}\right\| \right] =\varvec{a}_N\left( \varvec{x}_C\right) \end{aligned}$$
(13)

In (13), we group the continuum node coordinates and the incident node coordinates in a single array \(\varvec{x}_{C}=\left\{ \begin{array}{cc} \varvec{x}_{N}&\varvec{x}_{I}\end{array}\right\} ^{T}\) with cardinality \(\#\varvec{x}_{C}=d\left( d+2\right) \). We adopt the notation \(\varvec{x}_{N}\) for the coordinates of the continuum element nodes. For triangular and tetrahedral discretizations, \(\varvec{\xi }_{I}\) is a function of \(\varvec{x}_{N}\) and \(\varvec{x}_{I}\):

$$\begin{aligned} \varvec{\xi }_{I}=\varvec{a}_{N}\left( \left\{ \begin{array}{c} \varvec{x}_{N}\\ \varvec{x}_{I} \end{array}\right\} \right) =\varvec{a}_{N}\left( \varvec{x}_{C}\right) . \end{aligned}$$
(14)
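The closed-form parent coordinates (11) and the interpolation (12) can be sketched for a triangle as follows. This Python sketch assumes the common shape-function convention \(N_{1}=\xi _{1}\), \(N_{2}=\xi _{2}\), \(N_{3}=1-\xi _{1}-\xi _{2}\) (the paper's parent-domain convention may differ); all names are illustrative.

```python
import numpy as np

def parent_coords(x1, x2, x3, xI):
    # Solve xI = x3 + xi1*(x1 - x3) + xi2*(x2 - x3); Cramer's rule on this
    # 2x2 system gives closed-form ratios of determinants, cf. Eq. (11).
    M = np.column_stack((x1 - x3, x2 - x3))
    return np.linalg.solve(M, xI - x3)

def interpolate(x1, x2, x3, xi):
    # Eq. (12): x(xi) = sum_K N_K(xi) * x_K for the 3-node triangle
    N = np.array([xi[0], xi[1], 1.0 - xi[0] - xi[1]])
    return N[0] * x1 + N[1] * x2 + N[2] * x3

x1, x2, x3 = np.array([1.2, 0.1]), np.array([0.3, 1.5]), np.array([-0.2, 0.0])
xI = 0.2 * x1 + 0.5 * x2 + 0.3 * x3   # a point inside the triangle
xi = parent_coords(x1, x2, x3, xI)
```

Since the map (12) is affine for a 3-node triangle, `parent_coords` recovers \(\varvec{\xi }_{I}\) exactly and `interpolate` reproduces \(\varvec{x}_{I}\), which is why the minimization (11a) admits a closed-form solution.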

The first and second derivatives of \(\varvec{a}_{N}\) with respect to \(\varvec{x}_{C}\) make use of the following notation:

$$\begin{aligned} \varvec{A}_{N}\left( \varvec{x}_{C}\right)&=\frac{\textrm{d}\varvec{a}_{N}\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}}\qquad \left( d\times \left[ d\left( d+2\right) \right] \right) , \end{aligned}$$
(15)
$$\begin{aligned} {\mathcal {A}}_{N}\left( \varvec{x}_{C}\right)&=\frac{\textrm{d}\varvec{A}_{N}\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}}\qquad \left( d\times \left[ d\left( d+2\right) \right] ^{2} \right) . \end{aligned}$$
(16)

Source code for these functions is available on GitHub [45]. A mixed formulation is adopted with equal-order interpolation for the displacement \(\varvec{u}\) and the function \(\phi \). For a set of nodal displacements \(\varvec{u}_{N}\) and nodal potential values \(\varvec{\phi }_{N}\):

$$\begin{aligned} \varvec{u}\left( \varvec{x}_{C}\right)&=\varvec{N}_{u}\left( \varvec{x}_{C}\right) \cdot \varvec{u}_{N}, \end{aligned}$$
(17a)
$$\begin{aligned} \phi \left( \varvec{x}_{C}\right)&=\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) \cdot \varvec{\phi }_{N}, \end{aligned}$$
(17b)

where in three dimensions

$$\begin{aligned} \varvec{N}_{u}\left( \varvec{x}_{C}\right)&=\left[ \begin{array}{ccccc} \cdots &{} N_{K}\left[ \varvec{a}_{N}\left( \varvec{x}_{C}\right) \right] &{} 0 &{} 0 &{} \cdots \\ \cdots &{} 0 &{} N_{K}\left[ \varvec{a}_{N}\left( \varvec{x}_{C}\right) \right] &{} 0 &{} \cdots \\ \cdots &{} 0 &{} 0 &{} N_{K}\left[ \varvec{a}_{N}\left( \varvec{x}_{C}\right) \right] &{} \cdots \end{array}\right] ,\nonumber \\\end{aligned}$$
(17c)
$$\begin{aligned} \varvec{N}_{\phi }\left( \varvec{x}_{C}\right)&=\left[ \begin{array}{ccc} \cdots&N_{K}\left[ \varvec{a}_{N}\left( \varvec{x}_{C}\right) \right]&\cdots \end{array}\right] ^{T}, \end{aligned}$$
(17d)

where \(\varvec{\xi }_{I}=\varvec{a}_{N}\left( \varvec{x}_{C}\right) \). First and second derivatives of \(N_{K}\left[ \varvec{a}_{N}\left( \varvec{x}_{C}\right) \right] \) are determined from the chain rule:

$$\begin{aligned} \frac{\textrm{d}N_{K}}{\textrm{d}\varvec{x}_{C}}&=\frac{\textrm{d}N_{K}}{\textrm{d}\varvec{\xi }_{I}}\cdot \varvec{A}_{N}\left( \varvec{x}_{C}\right) ,\end{aligned}$$
(18)
$$\begin{aligned} \frac{\textrm{d}^{2}N_{K}}{\textrm{d}\varvec{x}_{C}^{2}}&=\varvec{A}_{N}^{T}\left( \varvec{x}_{C}\right) \cdot \frac{\textrm{d}^{2}N_{K}}{\textrm{d}\varvec{\xi }_{I}^{2}} \cdot \varvec{A}_{N}\left( \varvec{x}_{C}\right) +\frac{\textrm{d}N_{K}}{\textrm{d}\varvec{\xi }_{I}}\cdot {\mathcal {A}}_{N}\left( \varvec{x}_{C}\right) . \end{aligned}$$
(19)

For the test function of the incident point, we have

$$\begin{aligned} \delta \phi \left( \varvec{x}_{C}\right) =\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) \cdot \delta {\varvec{\phi }}_{N} +\varvec{\phi }_{N}\cdot \frac{\textrm{d}\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}}\cdot \delta \varvec{x}_{C}. \end{aligned}$$
(20)

For linear continuum elements, the second variation of \(\phi \left( \varvec{\xi }\right) \) is given by the following rule:

$$\begin{aligned} \textrm{d}\delta \phi \left( \varvec{x}_{C}\right) =\,&\, \delta \varvec{\phi }_{N}\cdot \frac{\textrm{d}\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}}\cdot \textrm{d}\varvec{x}_{C}+\textrm{d}\varvec{\phi }_{N}\cdot \frac{\textrm{d}\varvec{N}_{\phi } \left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}}\cdot \delta \varvec{x}_{C} \nonumber \\&+\varvec{\phi }_{N}\cdot \frac{\textrm{d}^{2}\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{x}_{C}^{2}}: \left( \delta \varvec{x}_{C}\otimes \textrm{d}\varvec{x}_{C}\right) \end{aligned}$$
(21)

Since the gradient of \(\phi \) makes use of the continuum part of the formulation, we obtain:

$$\begin{aligned} \nabla \phi \left( \varvec{x}\right) =\varvec{j}^{-T}\cdot \left[ \frac{\textrm{d}\varvec{N}_{\phi }\left( \varvec{x}_{C}\right) }{\textrm{d}\varvec{\xi }_{I}}\right] ^{T}\cdot \varvec{\phi }_{N}, \end{aligned}$$
(22)

where \(\varvec{j}\) is the Jacobian matrix in the deformed configuration. The element residual and stiffness are obtained from these quantities and are available on GitHub [45]. Use is made of the AceGen [46] add-on to Mathematica [47] to obtain the source code for the final expressions.

4 Algorithm and step control

All nodes are considered candidates and all elements are targets. A simple bucket-sort strategy is adopted to reduce the computational cost. In addition, step control is required to avoid a change of target during Newton–Raphson iterations. The screened Poisson equation is solved for all bodies in the analyses. Figure 1 shows the simple mesh overlapping under consideration. The resulting algorithm is straightforward. Its main characteristics are:

  • All nodes are candidate incident nodes.

  • All elements are generalized targets.

Fig. 1

Relevant quantities for the definition of a contact discretization for element e

The modifications required for a classical nonlinear Finite Element software (in our case SimPlas [48]) to include this contact algorithm are modest. Algorithm 1 presents the main tasks. In this Algorithm, blue boxes denote the contact detection task, which here is limited to:

  1.

    Detect nearest neighbor (in terms of distance) elements for each node. A bucket ordering is adopted.

  2.

    Find if the node is interior to any of the neighbor elements by use of the shape functions for triangles and tetrahedra. This is performed in closed form.

  3.

    If changes occurred in the targets, update the connectivity table and the sparse matrix addressing. Gustavson’s algorithm [49] is adopted to perform the updating in the assembling process.

In terms of detection, the following algorithm is adopted:

  1.

    Find all exterior faces, as faces sharing only one tetrahedron.

  2.

    Find all exterior nodes, as nodes belonging to exterior faces.

  3.

    Insert all continuum elements and all exterior nodes in two bucket lists. Deformed coordinates of nodes and deformed coordinates of element centroids are considered.

  4.

    Cycle through all exterior nodes

    (a)

      Determine the element bucket from the node coordinates

    (b)

      Cycle through all elements (e) in the \(3^{3}=27\) buckets surrounding the node

      i.

        If the distance from the node to the element centroid is greater than twice the edge size, go to the next element

      ii.

        Calculate the projection on the element (\(\varvec{\xi }_{I}\)) and the corresponding shape functions \(\varvec{N}\left( \varvec{\xi }_{I}\right) \).

      iii.

        If \(0\le N_{K}\left( \varvec{\xi }_{I}\right) \le 1\) for all K, then e is the target element. If the target element has changed, then flag the solver for a connectivity update.

Since the algorithm assumes a fixed connectivity table during Newton iterations, a verification is required after each converged iteration to check if targets have changed since last determined. If this occurs, a new iteration sequence with revised targets is performed.
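The detection loop above can be condensed into a short 2D sketch: bucket the element centroids by cell, then test each candidate node against the elements in the surrounding buckets (3×3 in 2D, \(3^3=27\) in 3D) using parent coordinates. This is a minimal Python illustration with hypothetical names and an illustrative two-element mesh; the centroid-distance filter of step i is omitted for brevity.

```python
import numpy as np
from collections import defaultdict

def build_buckets(centroids, cell):
    # Spatial hash: integer cell indices of element centroids
    buckets = defaultdict(list)
    for e, c in enumerate(centroids):
        buckets[tuple((c // cell).astype(int))].append(e)
    return buckets

def find_target(node, tri_nodes, coords, buckets, cell):
    key = tuple((node // cell).astype(int))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):               # 3x3 surrounding buckets (2D)
            for e in buckets.get((key[0] + di, key[1] + dj), []):
                x1, x2, x3 = coords[tri_nodes[e]]
                M = np.column_stack((x1 - x3, x2 - x3))
                xi = np.linalg.solve(M, node - x3)   # parent coordinates
                N = np.array([xi[0], xi[1], 1.0 - xi[0] - xi[1]])
                if np.all(N >= 0.0) and np.all(N <= 1.0):  # inside test
                    return e
    return None                              # node is not interfering

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tris = np.array([[0, 1, 2], [1, 3, 2]])      # two triangles in a unit square
centroids = coords[tris].mean(axis=1)
buckets = build_buckets(centroids, cell=1.0)
```

A node strictly interior to a triangle yields all shape-function values in \([0,1]\), which is exactly the target test of step iii; a node outside every neighboring element returns no target.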


The only modification required to a classical FEM code is the solution of the screened Poisson equation, the green box in Algorithm 1. The cost of this solution is negligible when compared with the nonlinear solution process, since the equation is linear and scalar; it is equivalent to the cost of a steady-state heat conduction solution. Note that this corresponds to a staggered algorithm.

5 Numerical tests

Numerical examples are solved with our in-house software, SimPlas [48], using the new node-to-element contact element. Only triangles and tetrahedra are assessed at this time, which provide an exact solution for \(\varvec{\xi }_{I}\). Mathematica [47] with the add-on AceGen [46] is employed to obtain the specific source code. All runs are quasi-static and make use of a Neo-Hookean model. If \(\varvec{C}\) is the right Cauchy–Green tensor, the second Piola–Kirchhoff stress is

$$\begin{aligned} \varvec{S}=2\frac{\partial \psi \left( \varvec{C}\right) }{\partial \varvec{C}}, \end{aligned}$$

where \(\psi \left( \varvec{C}\right) =\frac{\mu }{2}\left( \varvec{C}:\varvec{I}-3\right) -\mu \log \left( \sqrt{\det \varvec{C}}\right) +\frac{\chi }{2}\log ^{2}\left( \sqrt{\det \varvec{C}}\right) \) with \(\mu =\frac{E}{2(1+\nu )}\) and \(\chi =\frac{E\nu }{(1+\nu )(1-2\nu )}\) being constitutive properties.
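The energy \(\psi \left( \varvec{C}\right) \) above can be sanity-checked with a short sketch: it vanishes in the undeformed state \(\varvec{C}=\varvec{I}\) and is positive under stretch. The Python code and the parameter values are illustrative.

```python
import numpy as np

def psi(C, E=1.0e5, nu=0.3):
    # Compressible Neo-Hookean energy with mu = E/(2(1+nu)) and
    # chi = E*nu/((1+nu)(1-2nu)); J = sqrt(det C)
    mu = E / (2.0 * (1.0 + nu))
    chi = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    logJ = np.log(np.sqrt(np.linalg.det(C)))
    return 0.5 * mu * (np.trace(C) - 3.0) - mu * logJ + 0.5 * chi * logJ ** 2

C0 = np.eye(3)                  # undeformed state: C = I, zero energy
F = np.diag([1.1, 1.0, 1.0])    # illustrative uniaxial stretch
C1 = F.T @ F                    # right Cauchy-Green tensor
```

At \(\varvec{C}=\varvec{I}\) all three terms vanish (trace equals 3 and \(\log \sqrt{\det \varvec{C}}=0\)), confirming a stress-free undeformed configuration.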

5.1 Patch test satisfaction

We employ a corrected penalty so that the contact patch test is satisfied in most cases. This is an important subject for convergence of computational contact solutions and has been addressed here with a similar solution to the one discussed by Zavarise and co-workers [16, 17].

We remark that this is not a general solution and, in some cases, our formulation may fail to pass the patch test. Figure 2 shows the effect of using a penalty weighted by the edge projection, see [16]. However, this is not a universal solution.

Fig. 2

Patch test satisfaction by using a corrected penalty

5.2 Two-dimensional compression

We begin with a quasi-static two-dimensional test, as shown in Fig. 3, using three-node triangles. This test consists of the compression of four polygons (identified as part 3 in Fig. 3) by a deformable rectangular punch (part 1 in the same figure). The ‘U’ (part 2) is considered rigid but still participates in the approximate distance function (ADF) calculation. To avoid an underdetermined system, a small damping term is used, specifically 40 units with \({\textsf{L}}^{-2}{\textsf{M}}{\textsf{T}}^{-1}\) ISQ dimensions. Algorithm 1 is adopted with a pseudo-time increment of \(\Delta t=0.003\) for \(t\in [0,1]\).

For \(h=0.020\), \(h=0.015\) and \(h=0.010\), Fig. 4 shows a sequence of deformed meshes and the contour plot of \(\phi (\varvec{x})\). The robustness of the algorithm is excellent, at the cost of some interference for coarse meshes. To further inspect the interference, the contour lines for \(g(\varvec{x})\) are shown in Fig. 5. We note that coarser meshes produce a smoothed-out representation of the vertices, which causes the interference displayed in Fig. 4. Note that \(g(\varvec{x})\) is determined from \(\phi (\varvec{x})\).

Fig. 3

Two-dimensional verification problem. Consistent units are used

Fig. 4

Two-dimensional compression: sequence of deformed meshes and contour plot of \(\phi (\varvec{x})\)

Fig. 5

Gap (\(g(\varvec{x})\)) contour lines for the four meshes using \(l_{c}=0.3\) consistent units

Using the gradient of \(\phi \left( \varvec{x}\right) \), the contact direction is obtained for \(h=0.02\), as shown in Fig. 6. We can observe that, for the star-shaped figure, vertices are poorly represented, since small gradients are present due to the uniformity of \(\phi \left( \varvec{x}\right) \) in these regions. The effect of mesh refinement is clear, with finer meshes producing a sharper growth of the reaction when all four objects are in contact with each other. In contrast, the effect of the characteristic length \(l_{c}\) is not noticeable.

Concerning the effect of \(l_c\) on the fields \(\phi (\varvec{x})\) and \(g(\varvec{x})\), Fig. 7 shows that, although \(\phi (\varvec{x})\) is strongly affected by the length parameter, \(g(\varvec{x})\) shows very similar spatial distributions, albeit with different peak values. The effects of h and \(l_c\) on the displacement/reaction behavior are shown in Fig. 8. The mesh size h has a marked effect on the results up to \(h=0.0125\), whereas the effect of \(l_c\) is much weaker.

Fig. 6

Directions obtained as \(\nabla \phi \left( \varvec{\xi }\right) \) for \(h=0.020,\) 0.015 and 0.010

5.3 Three-dimensional compression

In three dimensions, the algorithm is in essence the same. Compared with geometric-based algorithms, it significantly reduces the coding required for the treatment of particular cases (node-to-vertex and node-to-edge, in both convex and non-convex arrangements). The determination of coordinates for each incident node is now performed on each tetrahedron, but the remaining tasks are unaltered. We test the geometry shown in Fig. 9 with the following objectives:

  • Assess the extension to the 3D case. Geometrical nonsmoothness is introduced with a cone and a wedge.

  • Quantify interference as a function of \(l_{c}\) and \(\kappa \) as well as the strain energy evolution.

Deformed configurations and contour plots of \(\phi \left( \varvec{x}\right) \) for this problem are presented in Fig. 9, and the corresponding CAD files are available on Github [45]. A cylinder, a cone and a wedge are compressed between two blocks. Dimensions of the upper and lower blocks are \(10\times 12\times 2\) consistent units (the upper block is deformable whereas the lower block is rigid) and initial distance between blocks is 8 consistent units. Length and diameter of the cylinder are 7.15 and 2.86 (consistent units), respectively. The cone has a height of 3.27 consistent units and a radius of 1.87. Finally, the wedge has a width of 3.2, a radius of 3.2 and a swept angle of 30 degrees. A compressible Neo-Hookean law is adopted with the following properties:

  • Blocks: \(E=5\times 10^{4}\) and \(\nu =0.3\).

  • Cylinder, cone and wedge: \(E=1\times 10^{5}\) and \(\nu =0.3\).

The analysis of the gap violation, \(v_{\max }=\sup _{\varvec{x}\in \Omega }\left[ -\min \left( 0,g\right) \right] \), as a function of pseudotime \(t\in [0,1]\) is especially important for assessing the robustness of the algorithm with respect to the parameters \(l_{c}\) and \(\kappa \). For the interval \(l_{c}\in [0.05,0.4]\), the effect of \(l_{c}\) is not significant, as can be observed in Fig. 10. Some spikes are noticeable around \(t=0.275\) for \(l_{c}=0.100\), when the wedge penetrates the cone. Since \(\kappa \) is constant, all objects are further compressed towards the end of the simulation, which increases the gap violation. In terms of \(\kappa \), effects are the same as in classical geometric-based contact. In terms of strain energy, higher values of \(l_{c}\) result in lower values of strain energy. This is to be expected, since smaller gradient values are obtained and the contact force herein is proportional to the product of the gradient and the penalty parameter. Convergence of the strain energy as a function of h is presented in Fig. 11. It is noticeable that \(l_{c}\) has a marked effect near the end of the compression, since it affects the contact force.
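The violation measure \(v_{\max }\) can be evaluated directly from sampled nodal gap values. A minimal Python sketch with illustrative data:

```python
import numpy as np

def gap_violation(g_values):
    # v_max = sup over sampled points of [-min(0, g)]: only negative gaps
    # (interference) contribute; non-negative gaps give zero violation
    return float(np.max(-np.minimum(0.0, g_values)))

g_sample = np.array([0.3, 0.0, -0.02, 0.1, -0.05])  # negative: interference
```

Only interfering points (negative gap) contribute, so a configuration with no penetration yields \(v_{\max }=0\).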

5.4 Two-dimensional ironing benchmark

This problem was proposed by Yang et al. [50] in the context of surface-to-surface mortar discretizations. Figure 12 shows the relevant geometric and constitutive data, according to [50] and [51]. We compare the present approach with the results of these two studies in Fig. 13. Differences exist in the magnitude of the forces, and we infer that this is due to the continuum finite element technology. We use finite-strain triangular elements with a compressible Neo-Hookean law [52]. The effect of \(l_{c}\) is observed in Fig. 13; as before, only a slight effect is noted in the reaction forces. We use the one-pass version of our algorithm, where the square indenter has the master nodes and the targets are all elements in the rectangle. Note that, since the cited work includes friction, we use here a simple model based on a regularized tangential law with a friction coefficient \(\mu _{f}=0.3\).

Fig. 7

Effect of \(l_{c}\) on the form of \(\phi (\varvec{x})\) and \(g(\varvec{x})\)

Fig. 8

Effect of h in (a) and effect of \(l_{c}\) in (b) on the displacement/reaction behavior

5.5 Three-dimensional ironing benchmark

We now perform a test of a cubic indenter on a soft block. This problem was proposed by Puso and Laursen [18, 19] to assess a mortar formulation based on averaged normals. The frictionless version is adopted [19], but we choose the most demanding case: \(\nu =0.499\) and the cubic indenter. Relevant data are presented in Fig. 14. The rigid \(1\times 1\times 1\) block is located 1 unit from the edge and is first moved down 1.4 units. After this, it moves longitudinally 4 units inside the soft block. The soft block is analyzed with two distinct meshes: \(4\times 6\times 20\) divisions and \(8\times 12\times 40\) divisions. Use is made of one plane of symmetry. A comparison with the vertical force in [19] is performed (see also [18] for a clarification concerning the force components). We allow some interference to avoid locking with tetrahedra. In [19], Puso and Laursen employed mixed hexahedra, which are more flexible than the crossed tetrahedra we adopt here. Figure 15 shows the comparison between the proposed approach and the mortar method of Puso and Laursen [19]. Oscillations are caused by jumps in the gradient of \(\phi (\varvec{x})\) across element boundaries, a consequence of the classical \({\mathcal {C}}^0\) finite-element discretization. Although these oscillations are observable, the present approach is simpler than that of Puso and Laursen.

Fig. 9

Data for the three-dimensional compression test. a Undeformed and deformed configurations (\(h=0.025\)). For geometric files, see [45]. b Detail of contact of cone with lower block and with the wedge. Each object is identified by a different color

Fig. 10

a Effect of \(l_{c}\) on the maximum gap evolution over pseudotime \((\kappa =0.6\times 10^{6}\), \(h=0.030)\) and b Effect of \(\kappa \) on the maximum gap evolution over pseudotime \((l_{c}=0.2\))

Fig. 11

a Effect of \(l_{c}\) on the strain energy (\(h=0.3\)) and b Effect of h on the strain energy \((l_{c}=0.2\)). \(\kappa =0.6\times 10^{6}\) is used

Fig. 12

Ironing benchmark in 2D: relevant data and deformed mesh snapshots

Fig. 13

Ironing problem in 2D. Results for the load in terms of pseudotime are compared to the values reported in Yang et al. [50] and Hartmann et al. [51]. \(\kappa =0.6\times 10^{6}\) is used. a Effect of h on the evolution of the horizontal (\(R_{x}\)) and vertical (\(R_{y}\)) reactions and b Effect of \(l_{c}\) on the evolution of the horizontal (\(R_{x}\)) and vertical (\(R_{y}\)) reactions for \(h=0.1667\)

Fig. 14

3D ironing: cube over soft block. Relevant data and results

Fig. 15

3D ironing: cube over soft block. Vertical reactions compared with results in Puso and Laursen [19]

6 Conclusions

We introduced a discretization and gap definition for a contact algorithm based on the solution of the screened Poisson equation. After a log-transformation, this is equivalent to the solution of a regularized Eikonal equation and therefore provides a distance to any obstacle or set of obstacles. This approximate distance function is smooth and is differentiated to obtain the contact force. It is combined with a Courant–Beltrami penalty to ensure a differentiable force along the normal direction. These two features are complemented by a step-control algorithm that ensures stable target-element identification. The algorithm avoids most geometrical calculations and housekeeping, and is able to solve problems with nonsmooth geometry. Very robust behavior is observed, and two difficult ironing benchmarks (2D and 3D) are solved successfully. Concerning the selection of the length-scale parameter \(l_{c}\), which yields the exact distance function in the limit \(l_{c}\rightarrow 0\), we found that it should be the smallest value compatible with a well-conditioned solution of the screened Poisson equation: too small a value of \(l_{c}\) produces poor results for \(\phi \left( \varvec{x}\right) \). Newton–Raphson convergence was found to be stable, as well as nearly independent of \(l_{c}\). In terms of further developments, a \({\mathcal {C}}^2\) meshless discretization is important to reduce the oscillations caused by normal jumps, and we plan to adopt the cone-projection method developed in [7] for frictional problems.
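The log-transformation invoked above can be made explicit. A sketch, assuming the screened Poisson problem is normalized so that its solution equals one on the obstacle boundary \(\Gamma _{o}\) (the standard setting in the approximate-distance-function literature):

```latex
% Screened Poisson problem:
u - l_{c}^{2}\,\Delta u = 0 \quad \text{in } \Omega ,
\qquad u = 1 \quad \text{on } \Gamma _{o}
% Log-transformation defining the approximate distance:
\phi (\varvec{x}) = -\,l_{c}\ln u(\varvec{x})
% Substituting u = e^{-\phi /l_{c}} into the PDE yields the regularized
% Eikonal equation:
\Vert \nabla \phi \Vert ^{2} - l_{c}\,\Delta \phi = 1
% which tends to the exact Eikonal equation \Vert \nabla \phi \Vert = 1,
% and hence to the exact distance, as l_{c} \rightarrow 0.
```

This makes the role of \(l_{c}\) apparent: it weights the smoothing term \(l_{c}\,\Delta \phi \), so larger values flatten the gradient of \(\phi \) while smaller values approach the exact, but nonsmooth, distance function.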