Abstract
We present a version of the Discrete Element Method considering the particles as rigid polyhedra. The Principle of Virtual Work is employed as the basis for a multibody dynamics model. Each particle surface is split into subregions, which are tracked for contact with other subregions of neighboring particles. Contact interactions are modeled pointwise, considering vertex-face, edge-edge, vertex-edge and vertex-vertex interactions. General polyhedra with triangular faces are considered as particles, permitting multiple pointwise interactions which are automatically detected along the model evolution. We propose a combined interface law composed of a penalty and a barrier approach, to fulfill the contact constraints. Numerical examples demonstrate that the model can handle normal and frictional contact effects in a robust manner. These include simulations of convex and non-convex particles, showing the potential of applicability to materials with complex-shaped particles such as sand and railway ballast.
Introduction
Modeling of granular materials is challenging, particularly when trying to represent continuum mechanics behaviour. Complex constitutive models appear in this context, usually involving many fitting parameters.
The Discrete Element Method (DEM) provides unique possibilities to handle the micromechanical behavior of granular media (see [1] for the method origins, [2] for a useful review of spherical DEM modeling among other topics and [3, 4] as textbooks on molecular dynamics, a field that shares many strategies with DEM).
Numerous implementations of DEM exist that consider spherical particles due to their geometrical simplicity. Usually the computational bottleneck of a DEM solver is associated with the spatial search/treatment of contact between particles. Spherical particles are convenient in this context, since evaluating their overlap/proximity is simple and straightforward. When the particle shape deviates from a sphere, the contact computation time usually increases strongly.
Unfortunately, spherical particle modeling is not sufficient for many applications. There are challenging practical problems of interest that motivate the development of DEM considering arbitrary-shaped particles, such as sandy soil and railway ballast mechanical interactions, and hopper discharge problems, just to mention a few examples that are heavily dependent on the particle shape. Indeed, the mechanical macroscale behavior of such materials is evidenced in packing density, compressibility, critical state friction angle and other mechanical properties. According to [5], the macroscale properties result from particle interactions, which are affected by the particle shape.
When there is a need to consider non-spherical particles in DEM models, several alternatives can be utilized. Clustering spheres can be employed, see, e.g., [6]. In [7] normal impact of quasi-spherical particles is explored. Forming clusters of tetrahedra with distinct shapes is introduced in the work of [8]. Clusters of spheres are employed in the context of distinct integration techniques in [9] and, finally, in [10] a scheme is proposed that represents surfaces as spheres from a triangular surface mesh. Clusters of spheres have the advantage of the simplicity of local treatment of spherical particles, but always lack continuity in particle curvature and definition of tangents at the boundary surface. According to [11] this may lead to clumping of particles.
Another possibility is the usage of polyhedra to describe particles. In this case, surface singularities (vertices, edges) are present. This approach may be interesting when the real particulate material consists of such polyhedra.
Modeling of DEM with polyhedra can be found in [12] and [13], which explore convex and non-convex (by convex decomposition) polyhedra simulation. In [14] convex polyhedra are addressed. In [15] and [16] efforts are presented to model convex and non-convex DEM-based polyhedra. A complementary alternative to DEM to model polyhedral particles (among other applications) is the usage of the Contact Dynamics Method (see, e.g., [17, 18]). In this case, one solves for the velocity function, and imposes the balance of momentum at each time step. Implicit schemes for time-integration are available, which permits employing large time steps. A review of this method, including a discussion of comparisons with DEM and applications for polygons, is found in [19].
When the particle shape is convex, a convenient and elegant description given by superellipsoids was proposed in [20] and applied to DEM simulations in [21]. Superellipsoids were also addressed in [22]. An initiative for dealing with the geometry of the particles in a more detailed way was presented in [23], which describes the so-called “granular element method”, a variant of DEM. It considers each grain’s boundary as described by a non-uniform rational B-spline (NURBS) surface. This allows DEM to consider realistic and complex granular shapes. In [11] the non-convexity of particle shapes is addressed by a particular contact algorithm (knot-surface, in the context of NURBS surfaces), which permits important advances in correlation with experimental results of sand specimen tests. Another effort to describe arbitrary-shaped particles is given in [24], which considers level set discrete elements.
When establishing a DEM model for applications in which particle shape plays a role, one can also approximate the physical behaviour by spherical models. In this case, however, the main drawback is that rolling resistance models are needed to achieve the expected macro mechanical properties, which may lead to problem-dependent calibration. For more complex particle shapes, particularly involving non-convex geometries, contact detection has to be handled in a different, more time-consuming way. It is highly desirable, however, to minimize the need for model calibration parameters. This challenge was addressed in [25], which shows the importance of particle shapes when considering shear band evaluation in sand specimens by numerical simulations. In the context of railway ballast simulation, in [26] model parameters are discussed for shear tests, as also addressed in [27] and [28]. In [29] computational strategies are proposed in the context of railway ballast simulation to handle numerous contact interactions. Sand material simulation employing clusters of spheres was addressed in [30], motivating the effort to represent arbitrary shapes, even when composed of spheres. Particle shape is also important for hopper discharge simulations. In [31] one can find a comparison of clusters of spheres and polyhedra for hopper experiments, providing an interesting discussion on the influence of particle shape and its modeling issues.
Proposed DEM model
Motivated by the need for a realistic shape representation of particles in the aforementioned problems, a new methodology will be developed which includes general polyhedra with convex and nonconvex shapes in a discrete element formulation. The model incorporates numerous pointwise contact interactions between particles, according to their shape and orientation. This is achieved by a strategy of division of the external surface of each particle into subregions which are described by faces, edges and vertices, as natural geometric components of general polyhedra. All of these entities are considered during the overall contact search.
Each pointwise contact interaction is addressed by degenerations of a specific surface-to-surface treatment, see [32, 33], which is a novel approach in the DEM context. We provide a systematic usage of the degenerated master-to-master technique, combining several kinds of contact degenerations to handle the possibly numerous interactions between general polyhedra.
Particles are considered as rigid bodies. A novel treatment for the contact interface law is proposed. At the interface a hybrid normal direction law is formulated, composed of a physically or numerically (penalty) ruled part and a barrier-based part. The barrier-based approach prevents penetration between particles and creates a thin layer of contact activation, borrowing ideas from molecular dynamics and from collision detection in the context of computer graphics. The proposed method is tested with basic and complex examples to show its robustness and generality.
Nomenclature
In the context of a 3D Euclidean space, the present work uses the following nomenclature: scalar variables are non-bold (e.g.: v), vectors are lowercase bold (e.g.: \(\varvec{v}\)) and second-order tensors are uppercase bold (e.g.: \(\varvec{V}\)). Column matrices (termed “vectors”) are also lowercase bold and matrices with more than one column are also uppercase bold. Zero column matrices are denoted by \(\varvec{o}_s\), where s is the number of rows.
The derivative of a quantity \(\varvec{a}\) with respect to a quantity \(\varvec{b}\) is denoted by \(\varvec{a}_{,\varvec{b}}\). The variation of a quantity \(\varvec{a}\) is indicated by \(\delta \varvec{a}\).
Model description
In the present work, discrete elements are named “particles” or “bodies”, with the same meaning. The time-evolution behavior of a DEM system is obtained by solving the equations of motion. For rigid bodies the equations of motion are written using the Newton-Euler description, see, e.g.: [34] and [3]. The dynamical behaviour of each rigid body is ruled by a set of six differential equations in time. In some applications the rotational part of the motion can be neglected, especially for very small particles. In such cases, only three differential equations of translational motion have to be used for each body. In the present work both translation and rotation are considered for the description of the particle motion, while assuming rigid body behavior.
In general, an analytical integration of the differential equations of motion is not possible due to the complexity and frequent changes of the contributions (forces/moments) acting on each body. Therefore, one usually has to solve the strongly coupled set of nonlinear differential equations by adopting a numerical scheme.
In the present work a weak form is employed to describe the equations of motion, which is equivalent to the Principle of Virtual Work (PVW). This is a common way to describe a multibody system in nonlinear solid mechanics, particularly when employing numerical methods such as the Finite Element Method (FEM) to consider the flexibility of bodies, see, e.g.: [35] and [36].
The general weak form which describes a multibody system composed of rigid or flexible bodies is given by:
where \(\delta W_i\) is the total virtual work of internal forces, \(\delta W_e\) is the total virtual work of external loads and the term \(\delta T\) describes the total inertial contribution to the model. All mechanical contact interactions are included in \(\delta W_c\). Additional terms can be considered, such as fluid loads. Each term in (1) contains contributions stemming from all bodies of the system.
Let N be the number of particles in the system. With that, the contributions in (1) can be written as a sum
where the superscript B denotes the virtual work contribution of body B. The term \(\delta W_c^{AB}\) is related to the virtual work of the contact interactions for a pair of bodies A and B.
Next, we will detail the methodology employed for the evaluation of each of the contributions in (2). Based on the rigid body assumption \(\delta W_i^B\) is equal to zero. Section 2.1 will provide details on the evaluation of \(\delta T^B\), Sect. 2.3 discusses the external loads and their contributions \(\delta W_e^B\) and Sect. 2.4 describes the methodology employed to address contact between bodies, leading to the contributions in \(\delta W_c^{AB}\).
Dynamics of a single rigid body
Details of the contribution \(\delta T^B\) related to each rigid body B are presented in this section. Rigid body motion and its causes are governed by the Newton-Euler equations. As rotations in a 3D Euclidean space have to be described by distinct techniques (such as rotation vectors and quaternions), we present here some details on these aspects, for completeness of the present work.
Kinematics
We adopt an updated Lagrangian description for the movement of material points. Let P be a generic material point in a body that is tracked along three successive configurations: the reference configuration r, the current configuration i and the next configuration \(i+1\).
The position of the material point P is denoted by \(\varvec{x}^r\), \(\varvec{x}^i\) and \(\varvec{x}^{i+1}\) for configurations r, i and \(i+1\), respectively. A general vector quantity \(\varvec{a}^{r}\) is associated with the material point P at the reference configuration. This quantity experiences rigid body rotations when the configuration changes to i, leading to \(\varvec{a}^{i}\) and to \(i+1\), leading to \(\varvec{a}^{i+1}\).
The translatory motion is given by the kinematic relation
where \(\varvec{u}^{\varDelta }\) is the incremental displacement of the material point P from time \(t_i\) to \(t_{i+1}\).
The evolution of the vector \(\varvec{a}^{r}\) is described by the rotation tensor as
The multiplicative decomposition of the rotation tensor holds
permitting the description of partial rotations.
The link between the rotation tensor operator and kinematic quantities describing a rotation is provided by a vector parameterization, here given by the so-called Rodrigues rotation vector \(\varvec{\alpha }\). This description has already been employed in the context of beam and shell structural models, see, e.g.: [37,38,39,40,41,42], and for rigid bodies in [43]. It was also employed in the context of spherical particles in [44] and [45].
Let a rotation of magnitude \(\theta \) about an axis with unit vector \(\varvec{e}\) be described by the Euler rotation vector \(\varvec{\theta }=\theta \varvec{e}\). For a Rodrigues rotation vector \(\varvec{\alpha }=\alpha \,\varvec{e}\), where \(\alpha =2 \tan {(\theta /2)}\), the corresponding rotation tensor is given by
where \(\varvec{I}\) is the identity tensor, \(\alpha =\Vert \varvec{\alpha }\Vert \) and \(\varvec{A}=skew(\varvec{\alpha })\). Equation (6) can be applied to write the rotation tensors \(\varvec{Q}^{i}\), \(\varvec{Q}^{\varDelta }\) and \(\varvec{Q}^{i+1}\). A convenient formula is available for updating the rotation vector directly, without usage of (5),
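As a numerical illustration of (6) and of the direct rotation-vector update, the following sketch (Python/NumPy; all function names are ours) builds the rotation tensor from a Rodrigues vector and composes two rotations directly in terms of their Rodrigues vectors, assuming the composition \(\varvec{Q}^{i+1}=\varvec{Q}^{\varDelta }\varvec{Q}^{i}\):

```python
import numpy as np

def skew(v):
    """Skew-symmetric tensor A = skew(v), so that A @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_tensor(alpha):
    """Rotation tensor for a Rodrigues vector alpha = 2*tan(theta/2)*e:
    Q = I + 4/(4 + ||alpha||^2) * (A + A@A/2)."""
    A = skew(alpha)
    return np.eye(3) + 4.0 / (4.0 + alpha @ alpha) * (A + 0.5 * (A @ A))

def update_rodrigues(alpha_i, alpha_delta):
    """Rodrigues vector of the composed rotation Q(alpha_delta) @ Q(alpha_i),
    computed without assembling the tensors (rotation-vector update)."""
    return (4.0 / (4.0 - alpha_i @ alpha_delta)
            * (alpha_i + alpha_delta - 0.5 * np.cross(alpha_i, alpha_delta)))
```

For instance, \(\varvec{\alpha }=2\tan (45^{\circ })\,\varvec{e}_3\) yields the tensor of a \(90^{\circ }\) rotation about the third axis.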
The time-derivative of the vector \(\varvec{a}^{i+1}\) can be computed by considering the previous configuration vector \(\varvec{a}^{i}\) as constant. This follows from the updated Lagrangian description, where a fixed current configuration i can be assumed. The angular velocity of \(\varvec{a}^{i+1}\) is described by \(\varvec{\varOmega }=skew(\varvec{\omega })\). With some algebraic work a useful relation between the angular velocity and the time-derivative of the corresponding Rodrigues rotation vector can be found, see, e.g. [46],
where
with \(\varvec{A}^{\varDelta }=skew(\varvec{\alpha }^\varDelta )\).
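This relation can be checked numerically by comparing \(skew(\varvec{\omega })\) with a finite-difference approximation of \(\dot{\varvec{Q}}\varvec{Q}^T\). A sketch (Python/NumPy), in which we assume the operator takes the form \(\varvec{\varXi }=\frac{4}{4+\Vert \varvec{\alpha }^{\varDelta }\Vert ^2}\left( \varvec{I}+\frac{1}{2}\varvec{A}^{\varDelta }\right) \), consistent with (6):

```python
import numpy as np

def skew(v):
    """Skew-symmetric tensor A = skew(v)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_tensor(alpha):
    """Q = I + 4/(4 + ||alpha||^2) * (A + A@A/2), eq. (6)."""
    A = skew(alpha)
    return np.eye(3) + 4.0 / (4.0 + alpha @ alpha) * (A + 0.5 * (A @ A))

def Xi(alpha):
    """Assumed form of the operator: omega = Xi(alpha_delta) @ alpha_dot."""
    return 4.0 / (4.0 + alpha @ alpha) * (np.eye(3) + 0.5 * skew(alpha))

# Finite-difference check: skew(Xi(a) @ a_dot) ~ d/dt[Q(a(t))] @ Q(a)^T
a, a_dot, h = np.array([0.2, -0.5, 0.3]), np.array([0.7, 0.1, -0.4]), 1e-6
Q_dot = (rotation_tensor(a + h * a_dot) - rotation_tensor(a - h * a_dot)) / (2 * h)
W = Q_dot @ rotation_tensor(a).T  # spin tensor, should equal skew(omega)
```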
Rigid body description
The formulation utilized herein was presented in complete form in [43] and is briefly summarized here for completeness. The reader interested in more details may refer to the original paper.
We describe the movement of a rigid body B by the choice of a material point P, named “pole”. This point does not need to lie in the physical domain of the body, but it has to follow the rigid body constraints, that is, it experiences a rigid body movement as if it were part of the body B. The center of mass of the body is denoted by G.
Based on the above notation the position of the center of mass is given by
Assuming that the vector \(\varvec{b}^r\) changes only its orientation but keeps its magnitude along time (rigid body assumption) yields
where \(\varvec{b}^{r}\) is a quantity evaluated at the reference configuration. It denotes the center of mass position with respect to the pole P. A generic material point within the body is given by \(\varvec{x}^{i+1}\), such that:
As all material points experience the same rigid body rotation, the update of \(\varvec{s}^{i+1}\) is given by
The inertia tensor at configuration \(i+1\) is defined by
where \(\rho \) is the specific mass of the body material and \(\varvec{S}^{i+1}=skew(\varvec{s}^{i+1})\). The inertia tensor \(\varvec{J}^{i+1} \) at configuration \(i+1\) is related to the inertia tensor at the reference configuration \(\varvec{J}^{r}\) by
Hence it can be pre-evaluated for any rigid body just using \(\varvec{s}^{r}\) instead of \(\varvec{s}^{i+1}\) in (14). Note that we have to consider the pole P when evaluating \(\varvec{J}^{r}\) and \(\varvec{J}^{i+1}\). Thus, a particular (and convenient) choice for the pole is the center of mass, which simplifies the forthcoming equations.
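The update of the inertia tensor can be sketched as follows (Python/NumPy; the box inertia values are an assumed example, not taken from the paper). Rotating \(\varvec{J}^{r}\) preserves its principal moments, as expected:

```python
import numpy as np

def box_inertia(m, a, b, c):
    """Reference inertia tensor J^r of a homogeneous box (edges a, b, c),
    with the pole chosen at the center of mass (assumed example body)."""
    return m / 12.0 * np.diag([b*b + c*c, a*a + c*c, a*a + b*b])

def update_inertia(J_ref, Q):
    """J^{i+1} = Q^{i+1} J^r (Q^{i+1})^T: no volume integral is redone."""
    return Q @ J_ref @ Q.T
```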
The kinetic energy \(T^B\) of the rigid body B is
By inserting (10)–(15) into (16) the first variation of the kinetic energy can be evaluated. Using furthermore the angular velocity relation presented in (8) leads, after some algebra, to
where
Here m is the mass of the body B, \(\varvec{\omega }\) is the body angular velocity, \(\dot{\varvec{\omega }}\) is its angular acceleration and \(\ddot{\varvec{u}}\) is the acceleration of the pole P. For later use, we define the inertial pseudo-moment \(\varvec{\mu }_t^B=\varvec{\varXi }^T\varvec{m}_t^B\).
The particular choice of the pole P as the center of mass simplifies (18) yielding
which is a more standard representation of the equations of motion for a rigid body.
Note that the only kinematic variables employed in the evaluation of \(\delta T^B\) are the incremental displacements of the pole \(\varvec{u}_P^\varDelta \) and the incremental Rodrigues rotation parameters \(\varvec{\alpha }_P^\varDelta \), which are the chosen degrees of freedom (DOF). The time-derivatives of these quantities have to be computed, see (18). Moreover, updating of the center of mass position \(\varvec{b}^{i+1}\) and of the inertia tensor \(\varvec{J}^{i+1}\) is also necessary. Both are obtained by applying the rotation tensor \(\varvec{Q}^{i+1}\) to the reference configuration values, as expressed in (11) and (15). Alternatively, one may save intermediate values at configuration i and then update employing the partial rotation \(\varvec{Q}^{\varDelta }\).
The inertia tensor is constant in a frame accompanying the rigid body motion, as employed in classical rigid-body derivations. However, the primary kinematic variable associated with rotation herein employed is the Rodrigues rotation vector, which is naturally defined with respect to a fixed frame (differently from alternative rotation schemes that are naturally described following the body orientation). This choice makes it more natural to work with the rotation tensor and the angular velocity vector written in the same basis associated with the fixed frame. This is why one must update the rotation tensor accordingly.
Timeintegration scheme
In the present work, we use the Newmark integration scheme, see e.g. [47]. This is based on approximations for time-variable quantities as a function of the adopted time step \(\varDelta t\). Here we assume that a time step \(\varDelta t\) governs the motion from the current configuration i to the next configuration \(i+1\). Within the Newmark method, the approximation formulae are
where the parameters \(\beta \) and \(\gamma \) determine the behavior of the integration scheme. Depending on this choice, numerical damping can be introduced, and the scheme can be rendered implicit or explicit. Our choice is \(\beta = 0.3\) and \(\gamma =0.5\), leading to an implicit method with small numerical damping.
Equation (20) can be rewritten such that velocity and acceleration of the unknown next configuration are given as a function of quantities from the previous configuration and the unknown incremental displacement \(\varvec{u}_P^\varDelta \), which follows from the equations of motion, see, e.g., [47]
where the following constants are used: \(\alpha _1=\frac{1}{\beta \left( \varDelta t\right) ^2}\), \(\alpha _2=\frac{1}{\beta \varDelta t}\), \(\alpha _3=\frac{1-2\beta }{2\beta }\), \(\alpha _4=\frac{\gamma }{\beta \varDelta t}\), \(\alpha _5=\left( 1-\frac{\gamma }{\beta }\right) \) and \(\alpha _6=\left( 1-\frac{\gamma }{2\beta }\right) \varDelta t\).
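As an illustration of these update formulae, a minimal Newmark step for a single-DOF linear oscillator \(m\ddot{u}+ku=0\) can be sketched as follows (Python; our own function names, and the oscillator itself is an assumed test problem, not one of the paper's examples):

```python
def newmark_constants(beta, gamma, dt):
    """Constants alpha_1 ... alpha_6 as listed in the text."""
    return (1.0 / (beta * dt * dt),             # alpha_1
            1.0 / (beta * dt),                  # alpha_2
            (1.0 - 2.0 * beta) / (2.0 * beta),  # alpha_3
            gamma / (beta * dt),                # alpha_4
            1.0 - gamma / beta,                 # alpha_5
            (1.0 - gamma / (2.0 * beta)) * dt)  # alpha_6

def step_oscillator(u, v, a, m, k, beta, gamma, dt):
    """One implicit Newmark step for m*a + k*u = 0: the balance at the next
    configuration is solved in closed form for the incremental displacement."""
    a1, a2, a3, a4, a5, a6 = newmark_constants(beta, gamma, dt)
    du = (m * (a2 * v + a3 * a) - k * u) / (a1 * m + k)
    u_new = u + du
    a_new = a1 * du - a2 * v - a3 * a   # acceleration update
    v_new = a4 * du + a5 * v + a6 * a   # velocity update
    return u_new, v_new, a_new
```

With \(\beta =0.3\) and \(\gamma =0.5\) the scheme is unconditionally stable for this linear problem and reproduces the oscillator period with only a small error.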
The proposed Newmark integration method can generally be applied to nonlinear problems, but lacks conservation of angular momentum. This drawback can be circumvented, see [48] and [49], where the Newmark method was modified for rotations in the context of simulation of beam-like structures and mechanisms involving joints together with flexible bodies, using Euler parameters. In [39] the modified Newmark method was reformulated for Rodrigues parameters, which yields
Equations (21) and (22) are convenient since they can be employed directly in the expression (17) for the rigid body contribution to the model weak form. As a result, (17) can be written at configuration \(i+1\) as a function of the incremental displacements \(\varvec{u}^{\varDelta }_P\) and incremental rotations \(\varvec{\alpha }^{\varDelta }_P\). The time integration is then already embedded. All other quantities involved, such as \(\varvec{b}^{i+1}\) and \(\varvec{J}^{i+1}\) are functions of the incremental displacements and rotation parameters. The corresponding values from the previous configuration are considered to be constant and known.
External loads
The contribution of the external loads to the virtual work \(\delta W_e\) in (1) stems from each rigid body B, given by \(\delta W_e^B\). This term includes the virtual work of forces and moments
where \(\varvec{f}_e^B\) is the external force and \(\varvec{m}_e^B\) is the external moment, both applied at the pole P. When using the same variables as in (18), \(\delta W_e^B\) is given by the relation
which introduces the external pseudo-moment \(\varvec{\mu }_e^B=\varvec{\varXi }^T\varvec{m}_e^B\) that is energetically conjugated with \(\delta \varvec{\alpha }^\varDelta _P\). The pseudo-moment depends on the amount of rotation experienced by the pole P, which is embedded in \(\varvec{\varXi }\) according to (9).
In the context of DEM, the weight of the particles is usually an important contribution to the external load. This is included in our model according to [43]. The weight of a particle can be represented by a single pointwise load applied at the center of mass. Since the pole P is arbitrary, one has to consider an equivalent system composed of a force and a moment applied at P. Considering a body with gravitational mass m (the same as the inertial mass) and the vector \(\varvec{g}_e\) describing the gravitational field yields
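The resulting equivalent force system at the pole can be sketched as follows (Python/NumPy; a hypothetical illustration of the idea, with b denoting the current pole-to-center-of-mass vector):

```python
import numpy as np

def weight_at_pole(m, g_vec, b):
    """Equivalent system at the pole P for the weight m*g acting at the
    center of mass: a force f = m*g plus the moment of f about P, with
    b the current pole-to-center-of-mass vector."""
    f = m * g_vec
    return f, np.cross(b, f)
```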
Contact between bodies
The treatment of the interaction of many particles is essential for a successful DEM implementation. Complex strategies have to be developed, especially for the handling of contact between general polyhedra that are possibly non-convex. The main idea is not to assume a single pointwise contact interaction for each pair of bodies AB (terms \(\delta W_c^{AB}\) in (2)), but instead to allow multiple pointwise contributions, acting at the same time, to compose the term \(\delta W_c^{AB}\).
In the sequel, a master-master contact formulation is presented to handle a single pointwise contact action-reaction pair. Next, a strategy to split the external surface of each particle into subregions is proposed, which allows capturing multiple contacts.
Mastermaster contact formulation
The master-master contact formulation is presented here for the particular case of contact between two rigid polyhedra. The reader interested in more details on the basics of this method may refer to [50, 51], in the context of beam-to-beam contact.
For a triangular face of a rigid polyhedron at configuration \(i+1\) the following surface parameterization is proposed
where the quantities \(\varvec{x}_P^{i+1}\) and \(\varvec{Q}^{i+1}\) relate to the rigid body kinematics described in Sect. 2.1. The vertices of the triangular face are denoted by the position vectors \(\varvec{x}_1^r\), \(\varvec{x}_2^r\) and \(\varvec{x}_3^r\) at the reference configuration, with origin at the pole P. Arbitrary material points on the triangular surface are described by the functions \(N_1\), \(N_2\) and \(N_3\)
where the parameters \(\zeta \) and \(\theta \) are chosen from the range \((\zeta ,\theta ) \in (-1,+1)\). They define a mapping from the parametric space (Fig. 1) to an actual material point in the three-dimensional Euclidean space (Fig. 2).
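A sketch of this parameterization (Python/NumPy): the shape functions below are an assumed collapsed-quadrilateral triangle map consistent with the stated parametric range, not necessarily the paper's exact expressions. With this choice, vertices 1, 2 and 3 map from \((-1,-1)\), \((+1,-1)\) and \(\theta =+1\), respectively:

```python
import numpy as np

def face_point(zeta, theta, x_pole, Q, x1r, x2r, x3r):
    """Current position of the material point (zeta, theta) of a rigid
    triangular face: Gamma = x_pole + Q @ (N1*x1r + N2*x2r + N3*x3r).
    N1, N2, N3 are an assumed collapsed-quadrilateral triangle map."""
    N1 = 0.25 * (1.0 - zeta) * (1.0 - theta)
    N2 = 0.25 * (1.0 + zeta) * (1.0 - theta)
    N3 = 0.5 * (1.0 + theta)
    return x_pole + Q @ (N1 * x1r + N2 * x2r + N3 * x3r)
```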
Employing this parameterization for both contacting bodies, their external surfaces (faces) are represented locally. The polyhedra faces are given by \(\varGamma _A\left( \zeta _A,\theta _A\right) \) and \(\varGamma _B\left( \zeta _B,\theta _B\right) \). The parameters (which can be interpreted as convective coordinates) can be organized in vectors \(\varvec{c}_A=\left[ \begin{array}{cc} \zeta _A&\theta _A \end{array}\right] \) and \(\varvec{c}_B=\left[ \begin{array}{cc} \zeta _B&\theta _B \end{array}\right] \). The surfaces depend on the general DOF organized in vectors, here named as \(\varvec{d}_A\) and \(\varvec{d}_B\) which leads to
where the index A or B refers to each of the bodies, thus defining the incremental displacements and rotations of their poles, see Sect. 2.1. Hence the rigid body motions of the external surface of each body are consistent with the assumed kinematics.
Material points on both surfaces \(\varGamma _A\) and \(\varGamma _B\) are located by choosing particular values for the surface parameters. By inserting all parameters into a single vector \(\varvec{c}=\left[ \begin{array}{cccc} \zeta _A & \theta _A & \zeta _B & \theta _B \end{array}\right] ^T\) and all DOF of bodies A and B into a single vector \(\varvec{d}\), such that \(\varvec{d}^T=\left[ \begin{array}{cc} \varvec{d}_A^T&\varvec{d}_B^T \end{array}\right] \), the gap vector
can be introduced. We assume that a single pointwise contact interaction occurs locally between the surfaces \(\varGamma _A\) and \(\varGamma _B\).
In the original mastermaster scheme, there is no preselection of a material point where contact takes place within the surfaces. Instead a Local Contact Problem (LCP) is defined to find the material points in both surfaces where contact occurs. Initially it is considered that both surfaces are locally smooth. The LCP is defined for a fixed set of DOF \(\bar{\varvec{d}}\). Then, one has to find the set of convective coordinates \(\bar{\varvec{c}}=\left[ \begin{array}{cccc} \bar{\zeta }_A&\bar{\theta }_A&\bar{\zeta }_B&\bar{\theta }_B \end{array}\right] ^T\) such that the conditions
are valid. Here \(\varGamma _{A,\zeta _A}\), \(\varGamma _{A,\theta _A}\), \(\varGamma _{B,\zeta _B}\) and \(\varGamma _{B,\theta _B}\) are the tangent vectors of the surfaces. The conditions in (30) are interpreted as orthogonality relations, meaning that the gap vector is orthogonal to both surfaces, see Fig. 3.
The solution of the LCP yields the point of contact interaction between the surfaces. For the definition of interface forces at the contact point it is useful to define the contact normal \(\varvec{n}\) as a quantity which depends on the gap vector \(\varvec{g}\)
So far the strategy is restricted to smooth surfaces, because (30) uses information from the tangent directions of the surfaces. In the case of singularities (such as the edges and vertices of polyhedra) one cannot define tangent directions uniquely. Instead, it is necessary to employ the subderivative concept.
A strategy to handle singularities using the LCP degeneration was proposed in [32] and [33]. The basic idea is to alleviate selected orthogonality relations from (30). In a systematic way [32] defined a degenerative operator \(\varvec{P}_s\)
where \(\varvec{c}_s\) is the vector of degenerated parameters, whose dimension is \(s \in {\mathbb {N}},\ 1\le s \le 4\). With adequate choices of the degenerative operator, the requirement of orthogonality in selected directions can be alleviated, thus creating an adequate geometrical treatment for contacts involving surfaces with singularities, such as the faces of a polyhedron. The LCP can be rewritten after its degeneration as, see details in [32],
LCP degenerations
Discrete element models with polyhedra have complex contact interactions. These are vertex-to-face, edge-to-face, face-to-face, edge-to-edge, vertex-to-edge or vertex-to-vertex interactions. The special case of a face-to-face contact interaction deserves more discussion. As the faces are planar by definition, a face-to-face interaction requires parallel faces with antiparallel external normal directions \(\varvec{n}_A\) and \(\varvec{n}_B\), see Fig. 4a. This would yield a continuous contact in the intersecting areas, see Fig. 4b. The new idea is to approximate this continuous contact by a set of pointwise contact interactions involving the singularities of the faces, as shown in Fig. 4c for the case of edge-to-edge and vertex-to-face interactions. In the following we will use the representation depicted in Fig. 4c. A similar strategy is also applied for the edge-to-face interaction, not illustrated here, but with an equivalent pointwise contact force representation, involving vertex-to-face or edge-to-edge interactions (see example 1 in Sect. 3.1).
This special treatment reflects the fact that a perfectly parallel approach of faces is improbable; moreover, any disturbance of parallelism would actually lead to a description with pointwise contact(s) involving the neighboring singularities (edges or vertices), as shown in Fig. 4c. Therefore, in the DEM context this seems to be the best choice.
Next the different kinds of degeneration are discussed. The analytical solution of each LCP degeneration can be found in the “Appendix”.
Vertex-to-face degeneration

In this case, the material point assumed as contact candidate (a vertex) is known on one surface. As an example: if the vertex is \(\varvec{x}_1^r\) in \(\varGamma _A\) in the reference configuration, the parameters \(\zeta _A=-1\) and \(\theta _A=-1\) are already known and fixed. In contrast, on surface \(\varGamma _B\) no parameter values are known a priori. This case can be systematically treated by the master-master formulation with the special degenerative operator
The geometric interpretation of this degenerated LCP is the solution of the orthogonal projection of a vertex onto a face. In this case, the contact normal direction is parallel to the external normal direction of the face.
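This degenerated LCP admits a closed-form geometric solution, sketched below (Python/NumPy; a plain-geometry illustration with our own names, not the paper's implementation):

```python
import numpy as np

def vertex_to_face(v, x1, x2, x3):
    """Orthogonal projection of a vertex v onto the plane of the triangular
    face (x1, x2, x3); returns the projected point and the signed gap
    measured along the face normal."""
    n = np.cross(x2 - x1, x3 - x1)
    n = n / np.linalg.norm(n)
    gap = (v - x1) @ n
    return v - gap * n, gap
```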
Edge-to-edge degeneration

In this case no material point is assumed as contact candidate a priori. However, the edges are associated with particular values of the parameters. For example, on a surface \(\varGamma _A\) the edge connecting the vertices \(\varvec{x}_1^r\) and \(\varvec{x}_2^r\) can be considered in the reference configuration. With that, \(\theta _A=-1\) is known a priori. On surface \(\varGamma _B\), considering also the edge connecting the corresponding vertices \(\varvec{x}_1^r\) and \(\varvec{x}_2^r\) leads to the already known value \(\theta _B=-1\). Based on that knowledge the LCP can be solved by employing the degenerative operator
The geometric interpretation refers to the solution of the minimum distance between two lines in space. The contact normal direction is not related to any face external normal in this case. Instead, it is determined by the LCP solution, disregarding some orthogonality relations from the original LCP in (30) by using the proper degenerative operator defined in (35).
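For non-parallel edges this is the classical closest-points-of-two-lines problem, which has a closed-form solution, sketched below (Python/NumPy; an illustration with our own names):

```python
import numpy as np

def edge_to_edge(p0, d1, q0, d2):
    """Closest points between the lines p0 + s*d1 and q0 + t*d2 (assumed
    non-parallel): enforcing (p - q) orthogonal to both edge tangents
    gives a 2x2 linear system in the parameters s and t."""
    r = q0 - p0
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([r @ d1, r @ d2])
    s, t = np.linalg.solve(A, b)
    return p0 + s * d1, q0 + t * d2
```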
Vertex-to-edge degeneration

In this case a vertex is taken a priori as the contact candidate on one surface, while on the other surface only the edge is chosen. As an example, if the vertex under analysis is in \(\varGamma _A\) and its coordinate is given by \(\varvec{x}_1^r\) in the reference configuration, the parameters \(\zeta _A=-1\) and \(\theta _A=-1\) are known and fixed. If the edge of interest is the one connecting \(\varvec{x}_1^r\) and \(\varvec{x}_2^r\) in the reference configuration on surface \(\varGamma _B\), this leads to the known value \(\theta _B=-1\). Therefore, the LCP is a single-variable problem, in which the only parameter still to be determined is \(\zeta _B\). The treatment by the master-master contact formulation is systematized in this case using the following degenerative operator:
The geometric interpretation of this case is the orthogonal projection of a point (the vertex) onto a line in space (containing the edge). As in the previous case, the contact normal direction is not related to the external normals of the faces. It is determined by the LCP solution using the proper degenerative operator defined in (36).
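Geometrically, this degeneration reduces to the single-variable projection of a point onto a line. A minimal sketch with assumed names:

```python
import numpy as np

def project_point_on_line(x, e1, e2):
    """Orthogonal projection of vertex x onto the line through edge (e1, e2).
    Returns the single edge parameter t (t in [0, 1] means the projection
    falls inside the edge) and the gap vector, whose direction serves as
    the vertex-to-edge contact normal."""
    d = e2 - e1
    t = (x - e1) @ d / (d @ d)   # closed-form minimizer of |e1 + t*d - x|
    xp = e1 + t * d              # foot of the projection
    g = x - xp                   # gap vector, orthogonal to the edge
    return t, g
```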
In this case both vertices are already taken as the material points that are candidates for contact. Hence the vertex \(\varvec{x}_1^r\) in \(\varGamma _A\) relates to the parameters \(\zeta _A=1\) and \(\theta _A=1\), which are known. Analogously, for the vertex in \(\varGamma _B\) denoted by \(\varvec{x}_1^r\) in the reference configuration, the parameters \(\zeta _B=1\) and \(\theta _B=1\) are known. Hence no LCP has to be solved for this point-to-point contact (similar to node-to-node FEM descriptions). The contact normal direction follows directly from the direction of the vector connecting both vertices.
Remark
The discussed examples of degenerations are given for particular choices of vertices (e.g., \(\varvec{x}_1^r\)) and edges of surfaces (e.g., connecting the points \(\varvec{x}_1^r\) and \(\varvec{x}_2^r\)). However, all vertices and edges can be described by such particular choices, simply by changing the numbering sequence for the selected vertices within a given parameterization.
Surface splitting in subregions
To make practical use of the results in Sect. 2.4.2, a strategy can be established in which the external surface of each body is split into subregions. Each of these subregions is associated with a local parameterization \(\varGamma _A\) in body A and \(\varGamma _B\) in body B. After this split, pairs of \(\varGamma _A\) and \(\varGamma _B\) are investigated in order to address possible contacts. The subregions of body A are denoted by \(A_i\) and are part of the surface of body A, with \(1\le i \le N_A\), where \(N_A\) is the number of subregions in body A. Analogously we define the subregions \(B_j\) of body B, with \(1\le j \le N_B\), where \(N_B\) is the number of subregions in body B.
The idea is illustrated in Fig. 5, where two polygons are divided into subregions. The left one has subregions \(A_1\) to \(A_5\) associated with edges and \(A_6\) to \(A_{10}\) associated with vertices. The right one has subregions \(B_1\) to \(B_4\) associated with edges and \(B_5\) to \(B_{8}\) associated with vertices.
In the context of DEM with polyhedral bodies, the subregions are: (i) faces; (ii) edges and (iii) vertices. These have to be defined for each polyhedron. The contact check involves testing all subregions of one polyhedron against all subregions of another. This strategy is quite general and works for arbitrarily shaped, possibly nonconvex, polyhedra. However, the associated computational cost is high, because many subproblems have to be solved. Figure 6 depicts a nonconvex polyhedron with its subregions of faces, edges and vertices. Even with its nonconvexity, it can be addressed by the proposed strategy.
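For illustration, the subregions of a polyhedron with triangular faces can be enumerated directly from its face connectivity. The helper below is a hypothetical sketch (names and data layout are assumptions, not the data structure used in the present implementation):

```python
def build_subregions(faces):
    """Enumerate the contact subregions of a polyhedron with triangular
    faces: the faces themselves, the unique (undirected) edges and the
    unique vertices. `faces` is a list of vertex-index triples."""
    edges, vertices = set(), set()
    for (i, j, k) in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))  # store each edge only once
        vertices.update((i, j, k))
    return list(faces), sorted(edges), sorted(vertices)
```

For a tetrahedron this yields 4 face, 6 edge and 4 vertex subregions, each of which would then be tested against the subregions of a neighboring polyhedron.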
A subregion thus consists not only of a polyhedron face, but possibly of an edge or a vertex. When evaluating the LCP for a pair of subregions, the associated degeneration scheme has to be selected. Note, however, that not all pairs of subregions have an LCP to be solved. As already pointed out, face-to-face and edge-to-face pairs are not considered.
Valid solutions of LCP
After obtaining the solution of a given LCP (see “Appendix”), one has to check whether the set of parameters \(\bar{\varvec{c}}\) lies within the valid range of the parameterizations of both surfaces \(\varGamma _A\) and \(\varGamma _B\) (see the valid parametric space in Fig. 1). For parameters outside the valid range, a contact interaction is not further considered for this subregion.
As an example, consider the approach of a vertex in \(\varGamma _B\), named subregion \(B_1\), towards body A. Body A has subregions defined on its external surface, some of which are depicted in Fig. 7. In this case, depending on the position of \(B_1\), its orthogonal projection onto the faces \(A_1\) and \(A_2\) may result in a solution which is outside the valid range. However, the projection of \(B_1\) onto the edge \(A_3\) leads to a valid contact candidate, with its orthogonal projection falling in the valid range. This interaction describes a vertex-to-edge case.
Another example relates to the approach of two pyramids A and B, shown in Fig. 8. In this particular case, the approach induces a contact interaction of their apexes. Depending on the trajectories of subregions \(A_1\) and \(B_1\), the vertex-to-vertex case may be the only one leading to a valid solution.
Testing the four possibilities of degeneration presented in Sect. 2.4.2 has two drawbacks. The first is the computational cost, because one needs to define subregions for all faces, edges and vertices of all polyhedra considered in the DEM model. This leads to a high number of geometric entities to be tested. The second drawback is the possibility of simultaneous validity of more than one LCP considered for a single pointwise solution. A simple example of such a scenario is shown in Fig. 9, which exhibits a vertex \(B_1\), a subregion of body B, approaching a concave region of body A. The subregions of body A in this case are the faces \(A_1\) and \(A_2\) and the edge \(A_3\). The LCP, interpreted as the orthogonal projection of the vertex \(B_1\) onto the other geometric entity on A, in this case yields valid parameters for all three tests. Therefore, in case of contact, more than one pointwise action-reaction pair is needed for \(B_1\); this is what properly prevents penetration.
Remark
A contact hierarchy can be introduced to circumvent such difficulties. Rules can be created to favor some contact interactions over others, depending on the expected geometric case. A simple (and preliminary) discussion of this idea is employed in the sixth numerical example, but the topic is worthy of further discussion.
Even with the mentioned drawbacks, we see our strategy as promising, because it is able to detect any kind of contact interaction between convex and nonconvex polyhedra, including a natural description of multiple pointwise contacts in one polyhedron, which is essential for DEM to handle multiple interactions due to local concavities. To make this approach practical and feasible, a robust strategy for detecting possible contact pairs of subregions on bodies A and B is necessary. These ideas are expanded in Sect. 2.5.
Contact detection
Once the LCP is solved, one needs to check for contact and to enforce the associated contact constraint. Many approaches are available for the latter, such as the penalty method, Lagrange multiplier approaches and the augmented Lagrangian method, see, e.g., [36]. Usually one needs to detect the penetration between bodies in order to activate a contact constraint in the model. A signed scalar gap quantity is used for that, as in classical master-slave schemes.
In the master-master contact formulation, the kinematic quantity available to detect contact is the gap \(\varvec{g}\). Its norm defines a scalar \(g_n=\Vert \varvec{g}\Vert \). This quantity is always positive and cannot by itself be used to detect penetration between bodies; it defines only their proximity level. However, a simple dot-product test of the contact normals at configurations i and \(i+1\) can be used, as shown in [32]. This leads to the evaluation of \(\varvec{n}^{i} \cdot \varvec{n}^{i+1}\). If the result is negative, an inversion of the gap direction has taken place between configurations i and \(i+1\), which indicates a penetration.
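The sign test described above can be sketched as follows (assuming the two contact normals are available as vectors; names are illustrative):

```python
import numpy as np

def penetration_occurred(n_i, n_ip1):
    """Dot-product test between the contact normals at configurations
    i and i+1: a negative value signals an inversion of the gap
    direction between the two configurations, i.e. a penetration."""
    return float(np.dot(n_i, n_ip1)) < 0.0
```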
This strategy has been used in [32] for contact detection when employing the master-master formulation. However, in the present context it is not sufficient. A contact pair could slide from one subregion pair to a neighboring subregion pair. In this case, the normals at times \(t_i\) and \(t_{i+1}\) belong to different subregions and the above evaluation does not make sense. Similar problems occur for thin bodies, vertices at cone tips and other cases. An enormous complexity is related to a robust contact search in the context of general nonconvex polyhedra, especially in DEM, where thousands of different cases have to be resolved at every time step. To overcome these difficulties the barrier method is applied, which never permits penetration, but fulfills the contact constraints in a weak sense.
We present next the contact contributions to be included in the weak form. The interface laws described in Sect. 2.4.7 provide details on the strategy for an implementation of the barrier method.
Contact contributions to the weak form
After establishing subregions of each body as introduced in Sect. 2.4.3 one can write
where \(\delta W_{ij}^{AB}\) are the contributions from each pair of subregions ij of a pair of bodies AB to the weak form.
Next, the terms \(\delta W_{ij}^{AB}\) are addressed based on the mastermaster contact formulation with its degenerations, including normal and frictional contributions. The presentation is short, more details can be found in [32] and [33].
The contact contribution \(\delta W_{ij}^{AB}\) can be split into
where \(\left( \delta W_{ij}^{AB}\right) ^{n}\) is the contribution of the elastic terms in normal direction, \(\left( \delta W_{ij}^{AB}\right) ^{d}\) is the contribution due to damping in normal direction and, finally, \(\left( \delta W_{ij}^{AB}\right) ^{f}\) represents the friction part. All terms are nonzero only if the contact pair AB with subregions i and j is active. Activation of a contact pair follows the discussion in Sect. 2.4.5, which will be revisited in Sect. 2.4.7.
Next, the contributions \(\left( \delta W_{ij}^{AB}\right) ^{n}\), \(\left( \delta W_{ij}^{AB}\right) ^{d}\) and \(\left( \delta W_{ij}^{AB}\right) ^{f}\) are discussed. They refer to the contact between bodies A and B with subregions i and j. For the ease of notation, these indices are omitted in the following and only used when necessary.
The elastic term related to the normal direction yields
where the elastic contact force is given by \(\varvec{f}_n=f_n\varvec{n}^{i+1}\) and the variation of the gap in normal direction by \(\delta g_n=\delta \Vert \varvec{g}^{i+1}\Vert = \delta \varvec{g}^{i+1} \cdot \varvec{n}^{i+1}\). The negative sign reflects the way contact is treated with the barrier method. Note that the smaller \(g_n\) is, the larger \(f_n\) becomes. Details are discussed in Sect. 2.4.7.
Damping in normal direction is given by
where the damping force is given by \(\varvec{f}_d\) at the contact point.
At this point it is convenient to introduce the relative velocity in normal direction for the contact pair at configuration \(i+1\), which is derived in [33] and leads to
This kinematical quantity has to be employed for the evaluation of \(\varvec{f}_d\) in Sect. 2.4.7. It is also possible to define a vector quantity for the relative velocity
Friction contributions are related to the tangential kinematics of the master-master contact, which was developed in [33]. The quantity \(\varvec{g}_t^{i+1}\) represents the tangential gap at configuration \(i+1\). This term quantifies the amount of sliding in the tangential direction. It is updated when contact takes place between configurations i and \(i+1\). The increment of the tangential gap is given by
With this increment the accumulated tangential gap at configuration \(i+1\) can be computed
where \(\varvec{Q}_c^{\varDelta }\) is a rotation tensor to account for rigid body rotations experienced by the contact normal/tangent from configuration i to \(i+1\).
Note that in the present contact formulation the tangential direction of a contact pair requires no a priori definition of tangential directions on the contacting surfaces. Instead, it is defined by a projection using the normal \(\varvec{n}^{i+1}\) in (43). Therefore, the formulation can handle singularities such as edge-to-edge contact and other particular cases in a natural and straightforward way, which is fundamental for a robust algorithmic treatment within DEM.
With that, the friction contribution to the weak form yields
where \(\varvec{f}_t\) is the friction force. The variation of the tangential gap at configuration \(i+1\) can be evaluated in a similar way as the time derivative of that quantity at configuration \(i+1\). Following [33] we obtain
The operator \(\varvec{D}\) can be found in [32] and denotes the relation between \(\delta \varvec{c}\) and \(\delta \varvec{d}\), which depends on the solved LCP. Therefore, contact degeneration plays a role within evaluation of \(\varvec{D}\), see [32],
The degenerative operator \(\varvec{P}_s\) has to be selected according to the degeneration, see Sect. 2.4.2, and the derivatives \(\varvec{r}_{4,\varvec{c}}\) and \(\varvec{r}_{4,\varvec{d}}\) are obtained differentiating Eq. (32).
Remark
The consistent linearization of the weak form involves relevant geometric terms. Indeed, one can evaluate \(\delta \varvec{g}^{i+1}\) consistently employing the operator \(\varvec{D}\) as follows:
With that, the LCP constraint information is embedded in the solution of the global model. This is essential for achieving quadratic convergence in the Newton-Raphson Method, as discussed in Sect. 2.6.
Interface law in normal direction
Contact constraints can be exactly enforced by techniques such as the Lagrange multiplier method, with the drawback of extra unknowns in the model, the Lagrange multipliers. A popular alternative is the penalty method, which permits penetration in a controlled manner, ruled by a penalty stiffness parameter. In this case, no extra unknowns are introduced; however, numerical problems such as ill-conditioning of the equation system can occur, related to the contact stiffness. Both techniques require computing the penetration within each contact region.
As discussed in Sect. 2.4.5, the complexity of detecting penetration is very high for polyhedral particles. The main difficulty stems from the need to check for pointwise contact occurrence between subregions. This local search needs to be performed for many pairs of subregions, generally leading to multiple interactions, possibly involving concave contact regions when evaluating the gap \(\varvec{g}\). A different way to treat contact problems relies on a technique that does not permit penetration, the barrier method. Instead, it detects contact based on proximity between bodies. This method is frequently employed by the computer graphics community, see, e.g., [52] and [53].
The basic idea of the barrier method is to enforce the contact constraint by introducing a gap-dependent barrier function in the model potential. This function assumes very large values when two bodies approach each other closely. The first variation of the barrier potential leads to a term in the weak form, seen in (39). It yields an expression for the elastic force \(\varvec{f}_n=f_n\varvec{n}^{i+1}\) in normal direction. The magnitude of \(f_n\) follows from an evaluation of the barrier function, which is either always active or active only when two bodies come close. Similar treatments exist from the physical point of view when one is interested in very small geometric scales: in molecular dynamics, forces stem from potentials such as the Lennard-Jones potential, see, e.g., [54]. The main advantage is that no penetration is needed to activate a contact constraint, but only the proximity of two or more bodies.
Based on such ideas we propose a compliance law for the contact behavior in normal direction. The law can translate a local physical behavior of contact (such as, e.g., Hertz contact [55]) or can be interpreted as a numerical way to enforce contact constraints approximately, as in a penalty-based approach.
The interface law provides nonzero contact forces only within the proximity level \(g_n<\bar{g}_n\). With that, one avoids the possibility of spurious contact forces exerted between bodies located arbitrarily far apart. The value \(\bar{g}_n\) is an activation threshold. Moreover, a very large force stems from the barrier method when \(g_n \rightarrow 0\), but physical/numerical control is needed when the interface law is activated at not too close proximity levels. This can be achieved by a hybrid interface law composed of two parts: (i) a physically or numerically ruled part for not too close proximity values (from the activation threshold down to a certain fraction of it) and (ii) a barrier-based part when the bodies come very close, to avoid penetration
where \(\epsilon _1\) and \(n_1\) are parameters establishing part (i) of the compliance law and \(\bar{\bar{g}}_n\) is a proximity threshold for changing from part (i) to part (ii) of the compliance law. We define \(\bar{\bar{g}}_n = f {\bar{g}}_n\), where \(f \in {\mathbb {R}}\), \(0<f<1\). The parameter \(n_2\) is associated with the intensity of the barrier function. The remaining parameters \(\epsilon _2\) and \(c_2\) are chosen to ensure the matching conditions
resulting in
With the law in (50) we achieve a smooth transition between parts (i) and (ii) of the interface law, which yields good numerical behavior. One can choose the parameters of part (i) to recover a Hertzian compliance law, which is the basis of many DEM implementations for spheres, see, e.g., [44, 56, 57], and for superellipsoids, see, e.g., [20, 21]. One can also recover in part (i) a simple penalty-based linear law. Examples in Fig. 10 illustrate such interface laws.
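The hybrid construction can be illustrated by a small sketch. Since the exact expressions of (49)-(50) are not reproduced here, part (i) is assumed below to be a power-type penalty law and part (ii) a power-type barrier; \(\epsilon _2\) and \(c_2\) then follow from value and slope matching at the transition gap, mirroring the matching conditions above. All names and functional forms are illustrative assumptions:

```python
def make_normal_law(eps1, n1, g_bar, f=0.5, n2=2.0):
    """Hybrid normal interface law sketch. Part (i): penalty-like
    f_n = eps1*(g_bar - g_n)**n1, active for g_n < g_bar.
    Part (ii): barrier f_n = eps2*g_n**(-n2) + c2, for g_n < f*g_bar,
    with eps2, c2 from C0/C1 matching at the transition proximity."""
    g_tr = f * g_bar                                   # transition gap
    f1 = lambda g: eps1 * (g_bar - g) ** n1            # part (i) value
    df1 = lambda g: -eps1 * n1 * (g_bar - g) ** (n1 - 1.0)  # its slope (<0)
    eps2 = -df1(g_tr) * g_tr ** (n2 + 1.0) / n2        # slope matching
    c2 = f1(g_tr) - eps2 * g_tr ** (-n2)               # value matching
    def f_n(g):
        if g >= g_bar:
            return 0.0                                 # outside activation
        if g >= g_tr:
            return f1(g)                               # part (i)
        return eps2 * g ** (-n2) + c2                  # part (ii), barrier
    return f_n
```

The returned force is zero beyond the activation threshold, continuous and continuously differentiable at the transition, and grows without bound as \(g_n \rightarrow 0\), in the spirit of Fig. 10.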
When employing the proposed interface law, a threshold value \(\bar{g}_n\) for contact activation has to be selected. This can be visualized as the thickness of a contact skin superimposed on all surfaces of the bodies. For smaller values of \(\bar{g}_n\), less alteration is introduced in the geometry for contact detection/evaluation. The value of \(\bar{g}_n\) is also linked to the desired contact stiffness (ruled by \(\epsilon _1\)). The choice of a smaller \(\bar{g}_n\) is usually combined with a larger \(\epsilon _1\); otherwise, only the barrier part of the interface law will be experienced by the bodies, which does not take advantage of the hybrid nature of the law. Thus, the choice of parameters has to be a compromise between the desired (usually small) \(\bar{g}_n\) and the contact stiffness. This is similar to the relation between contact stiffness and allowed penetration in contact formulations that permit penetration (for example the penalty method).
Figure 10(b) shows an example of the interface law that is adopted in example 6, Sect. 3.6. To estimate the maximum time step allowed in an explicit integration scheme, one evaluates the Courant criterion. Representing the contact pair of two particles as an equivalent mass-spring oscillator, the tangent stiffness is obtained as \(k_c=\frac{df_n}{dg_n}\) and an equivalent mass is estimated by \(m_c\). A rough estimate for the critical time step is then given by \(\varDelta t_c = 2\sqrt{\frac{m_c}{k_c}}\). Figure 11 shows that this critical time step becomes very small when a close gap is experienced during contact. In the present work we employ an implicit time-integration scheme; thus no critical time step has to be respected and the computation can be run with much larger time steps, as demonstrated in the numerical examples.
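The critical time-step estimate can be sketched as follows; the tangent stiffness is obtained here by a central finite difference of the interface law, and its magnitude is used since \(df_n/dg_n\) is negative (an illustrative helper, not part of the implicit solver used in this work):

```python
import math

def critical_time_step(f_n, g, m_eq, dg=1e-9):
    """Rough Courant-type estimate dt_c = 2*sqrt(m_c/k_c) for an
    equivalent mass-spring contact oscillator. k_c is the magnitude of
    the tangent stiffness df_n/dg_n of the interface law at gap g."""
    k_c = abs((f_n(g + dg) - f_n(g - dg)) / (2.0 * dg))  # numerical tangent
    return 2.0 * math.sqrt(m_eq / k_c)
```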
When addressing contact with the barrier method, one cannot permit penetrations, which would not make physical sense. However, when solving the nonlinear system (1) by the Newton-Raphson Method, it is possible to find (wrong) solutions that include penetration between bodies. Then, one needs an efficient way to verify the occurrence of penetration. This can be done at the end of each converged time step, by executing the simple test \(\varvec{n}^{i} \cdot \varvec{n}^{i+1}\). The result always has to be positive, for all contact pairs present in the model. In case of a negative value, one has to discard such a solution and use a smaller time step to find a valid one.
Damping model for normal direction
Damping in normal direction is classically related to the coefficient of restitution, which is a measure of the energy dissipated when contact-impact has taken place. Here we follow an alternative way, which describes damping as viscous dissipation. This is common in many DEM applications, see, e.g., [56,57,58] and [21]. We use the model from [21], but do not assume Hertzian contact. We create a damping ratio input for the model, leading to an instantaneous viscous damping coefficient (dashpot model). Damping is evaluated in a way consistent with our proposed interface law for the elastic contact force in (49).
The viscous damping force is evaluated using
where \(\zeta \) is the desired damping ratio and \(\frac{d f_n}{d g_n}\) depends on the interface law, see (49). The desired level of damping is chosen to match known experimental data, as in the case of the coefficient of restitution, when modeling contact-impact by distinct approaches. The negative sign inside the square root is inserted because the derivative of the normal elastic force with respect to the proximity \(g_n\) is always negative. The parameters \(m_A\) and \(m_B\) are the masses of the contacting bodies.
The damping force \(\varvec{f}_d\) in (40) is given by
which ensures that the total normal contact force (\(\varvec{f}_n + \varvec{f}_d\)) is always compressive, as in [21]. The damping force in (52) represents a simple way to introduce a desired damping ratio \(\zeta \) for an equivalent linear spring-dashpot oscillator at each instant.
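A sketch of this damping evaluation, under assumed sign conventions for the scalar normal relative velocity (the exact expressions (51)-(53) are not reproduced; the reduced mass and the compressive cap mirror the description above):

```python
import math

def normal_damping_force(zeta, dfn_dgn, m_A, m_B, v_n, f_n):
    """Viscous normal damping consistent with the interface law: the
    dashpot constant uses the (negative) tangent stiffness dfn_dgn and
    the reduced mass of the pair; the result is capped so that the
    total normal force f_n + f_d remains compressive (non-negative)."""
    m_eq = m_A * m_B / (m_A + m_B)            # reduced mass of the pair
    c = 2.0 * zeta * math.sqrt(-dfn_dgn * m_eq)
    f_d = -c * v_n                            # opposes the relative velocity
    return max(f_d, -f_n)                     # keep f_n + f_d >= 0
```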
Note that a given input of \(\zeta \) does not necessarily reproduce the classical behavior of a mass-spring-dashpot model (for example, \(\zeta =1\) leading to critical damping). Indeed, the viscous damping can change when expressed by (53). Moreover, in the present context we may have multiple simultaneous contact forces between bodies. In this case, the employed equivalent mass evaluation could be improved, such that each contact pair would have its own damping related to the masses of the pair. We decided not to propose such a refinement in our damping model, to keep it simpler; this topic is worthy of investigation, but outside the scope of the present work. Thus, \(\zeta \) may be seen as an equivalent input parameter to reach a desired level of damping, defined similarly to a classical mass-spring-dashpot model, but with distinct particularities.
Interface law in tangential direction
The classical Coulomb law is introduced to model friction at each contact point. The friction force follows from a rheological model for the tangential direction based on elastic and damping contributions. This comprises a model already applied for sphere-to-sphere contact in [57]. We employ enhancements as proposed in [33].
For a given contact pair the tangential gap and the relative velocity can be evaluated and used to compute a trial friction force composed of an elastic part \(\varvec{f}_t^e\) and a viscous part \(\varvec{f}_t^d\)
where \(\epsilon _t\) is a tangential penalty stiffness and \(c_t\) is a tangential viscous damping coefficient. The trial friction force is given by
Both \(\varvec{f}_t^e\) and \(\varvec{f}_t^d\) lie in the tangent plane of contact (projection \(\varvec{I}-\varvec{n}\otimes \varvec{n}\)), defined by the normal direction \(\varvec{n}\). However, the forces \(\varvec{f}_t^e\) and \(\varvec{f}_t^d\) are not necessarily parallel.
The magnitude of the friction force is limited by Coulomb’s law. When the current contact status is sticking, no sliding occurs in the next configuration if \(\Vert \varvec{f}_t^{tr} \Vert \le \mu _s f_n\), where \(\mu _s\) is the static friction coefficient. In this case, the contact remains in sticking status. Otherwise, a sticking-to-sliding transition takes place.
When the current contact status is sliding, sticking occurs in the next configuration if \(\Vert \varvec{f}_t^{tr} \Vert \le \mu _d f_n\), where \(\mu _d\) is the dynamic friction coefficient. Then, a sliding-to-sticking transition takes place. Otherwise, sliding remains active.
After the Coulomb inequality test, the tangential force is updated. For a sticking contact status the update is \(\varvec{f}_t=\varvec{f}_t^{tr}\). When sliding occurs, the friction force is given by \(\varvec{f}_t=\mu _d f_n \varvec{t}^{i+1}\), where
is the sliding direction.
Finally, one has to update the tangential gap \(\varvec{g}_t^{i+1}\) for the next step when sliding occurs. This has to be done in such a way that this quantity retains the “recoverable part” of the sliding tendency. Here we adopt the rheological model proposed in [33], which considers only the elastic part of friction for a possible update of \(\varvec{g}_t^{i+1}\). One has to check whether \(\Vert \varvec{f}_t^{e} \Vert > \mu f_n\), where \(\mu =\mu _s\) if the contact status is sticking and \(\mu =\mu _d\) otherwise. If this inequality holds, the sliding in the elastic part of friction is given by:
Then, the following updating formula follows:
If \(\Vert \varvec{f}_t^{e} \Vert \le \mu f_n\), no sliding takes place in the elastic part of the friction model.
Note that the sliding or sticking status of the contact is ruled solely by the inequality test employing \(\varvec{f}_t^{tr}\). Thus, the composed friction model with elastic and damping parts has to be used. The proposed update for sliding in (57), considering only \(\varvec{f}_t^{e}\), exhibits some numerical advantages discussed in [33].
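The stick/slip logic of this section can be gathered into one update routine. The sketch below uses assumed sign conventions and an assumed sliding direction (the trial-force direction); it mirrors the structure of (54)-(57) but is not the exact implementation:

```python
import numpy as np

def friction_update(g_t, v_t, f_n, eps_t, c_t, mu_s, mu_d, sticking):
    """One-point Coulomb stick/slip update sketch: a trial force with
    elastic (eps_t) and viscous (c_t) parts is tested against the
    friction cone; on sliding, the force is mapped onto the cone and
    the elastic tangential gap is scaled back so that only its
    'recoverable part' is retained for the next step."""
    g_t = np.asarray(g_t, float)
    f_e = -eps_t * g_t                         # elastic contribution
    f_d = -c_t * np.asarray(v_t, float)        # viscous contribution
    f_tr = f_e + f_d                           # trial friction force
    mu = mu_s if sticking else mu_d            # limit from current status
    if np.linalg.norm(f_tr) <= mu * f_n:
        return f_tr, g_t, True                 # stick: keep trial force
    t_dir = f_tr / np.linalg.norm(f_tr)        # assumed sliding direction
    f_t = mu_d * f_n * t_dir                   # force on the Coulomb cone
    norm_fe = np.linalg.norm(f_e)
    g_new = g_t
    if norm_fe > mu * f_n:                     # sliding in the elastic part
        g_new = g_t * (mu * f_n / norm_fe)     # keep only recoverable gap
    return f_t, g_new, False
```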
Global contact search
The computational bottleneck of numerical solutions using DEM is related to contact search and contact evaluation. The heavy computational cost stems from (2), which shows the double summation associated with all possible contact contributions. In our formulation the scenario is even more complex, because for each contribution \(\delta W_c^{AB}\) one needs to seek possible interactions between subregions of the bodies, as expressed in (37). Therefore, the number of contact candidates is very high in practical problems involving general polyhedra.
To make the model computationally feasible, the global contact search has to be enhanced in order to avoid evaluating contact candidates that are far away from each other. Different global search strategies have been developed over the years, involving, e.g., bounding volume (BV) overlap search and combined Verlet and linked-cell schemes.
Bounding volumes overlapping
A first level of BVs is defined for the bodies, considering only spheres, as shown in Fig. 12a. This provides a quick search for overlap/proximity between such spheres and permits the elimination of weak candidates. Our strategy here is to always construct inflated BVs, ruled by an inflation factor. Then, in case of overlap, proximity is detected. Otherwise, no contact is considered between the tested bodies.
Once overlap is detected in the first-level BV search, a second level is entered, in which a BV is defined for each subregion of the bodies that overlapped in the first BV search. In this context, we consider oriented and inflated BVs for each geometric entity: (i) faces have a triangular prismatic BV; (ii) edges have a cylindrical BV and (iii) vertices have a spherical BV. An inflation factor is introduced for each defined BV. Again, a search for overlap is performed. In the positive case, a pair of subregions becomes a strong (probable) contact candidate. Otherwise, no contact is considered. Examples of BVs for geometric entities are shown in Fig. 12b–d.
Only the strong contact candidates are checked for contact using the methods in Sect. 2.4, which results in contributions to the weak form (38).
Combined Verlet and linked-cell schemes
Both levels of the BV overlap search are computationally costly. In particular, the second level (subregions) involves elaborate BVs. Due to the numerous tests expected in large-scale practical problems, the contact search has to be improved even further.
Instead of checking all combinations of BV overlap at each time increment, a list of probable contact candidates can be established by a simple neighbor search. Only the candidates from such lists are worth a closer investigation employing the inflated BV overlap search. These lists are based on a Verlet scheme [54]. In the first level, they are generated considering the distance between pairs of body poles, taken as reference points, together with the radius of the inflated BV. In the second level, the centroid of each subregion BV is taken as a reference, together with a precomputed cutoff size. Proximity between subregion BVs is checked, thereby establishing a Verlet list also for the second search level. This process saves substantial computational time when compared to an all-to-all search.
A second scheme to save computational time in the global contact search is the linked-cell scheme. The basic idea stems from molecular dynamics applications (see, e.g., [4]). Cells are defined as regions in space. Each body is associated with a cell according to its current location (in our case, a reference point of the body is considered). Then, an enhanced search is performed considering only neighboring cells as candidates for a body located in a given cell. This leads to another list for enhancing the spatial search, based not on each particle's neighborhood, but on organizing particles into groups according to their current locations and associating them with cells.
Both the Verlet and linked-cell schemes were implemented independently, and also in combination. The most efficient scheme is problem-dependent. The reader may refer to [59] and [60] for comparisons, as well as for details on how to enhance the numerical implementation.
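The linked-cell idea can be sketched in a few lines: bodies are binned into cubic cells by their reference points, and only pairs within the same or adjacent cells are reported as candidates for the finer BV overlap tests (illustrative names; cell size and binning policy are assumptions):

```python
from collections import defaultdict
from itertools import product

def linked_cell_pairs(positions, cell_size):
    """Linked-cell candidate generation: bin reference points into cubic
    cells and report only pairs located in the same or neighboring cells."""
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        key = tuple(int(c // cell_size) for c in p)  # integer cell index
        cells[key].append(idx)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):  # 27 neighbor cells
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:                        # each pair reported once
                        pairs.add((i, j))
    return pairs
```

Distant bodies never share neighboring cells and are therefore excluded before any BV test, which is the source of the computational saving.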
Global solution
A strategy for the overall model integration is derived on the basis of (1). We consider a system with N bodies. Each body has a reference point (pole) P, with which six DOFs are associated (incremental displacements \(\varvec{u}^{\varDelta }_P\) and incremental rotations \(\varvec{\alpha }^{\varDelta }_P\)).
To begin the time integration, we have to assume an initial status for all possible contact interactions. No overlaps between bodies are considered in the initial (reference) configuration. This is essential for the usage of the normal contact interface law and the barrier method, which cannot handle overlaps, only proximity, between contact surfaces.
We start the simulation with an initial global contact search, as discussed in Sect. 2.5. Considering only the strong contact candidates, the LCP and contact contributions are evaluated. When \(g_n<\bar{g}_n\), according to the normal interface law expressed in (49), an active contact pair is established. It has nonzero contributions to the weak form. All the nonzero contributions of (38) are inserted into (37) and thus all contact contributions are included in (1).
Set of equations for the global model
To perform the time integration, we have to write (1) for a given time instant. Following the scheme of Sect. 2.2, the still unknown (next) configuration \(i+1\) has to be considered. Thus, incremental DOFs appear together with current (known) values at configuration i. A 6N-dimensional global vector of DOFs \(\varvec{a}\) is introduced
where \(\varvec{u}_B^{\varDelta }\) and \(\varvec{\alpha }_B^{\varDelta }\) (\(B=1,...,N\)) are the incremental DOFs describing the motion of each rigid body. Analogously, it is possible to write a similar vector for the virtual quantities
where \(\delta \varvec{u}_B^{\varDelta }\) and \(\delta \varvec{\alpha }_B^{\varDelta }\) are virtual quantities associated with each DOF.
Now the weak form (1) can be written as
where the 6N-dimensional vector of residuals \(\varvec{r}\) is introduced
where each term \(\varvec{r}_f^B\) and \(\varvec{r}_{\mu }^B\) (\(B=1,...,N\)) represents a contribution given by
The inertial contributions \(\varvec{f}_t^B\) and \(\varvec{\mu }_t^B\) stem from (18) and the external loads contributions \(\varvec{f}_e^B\) and \(\varvec{\mu }_e^B\) follow from (24). The terms \(\varvec{f}_c^B\) and \(\varvec{\mu }_c^B\) represent contact contributions, stemming from each contact pair between subregions related to (38).
The weak form (61) is valid for arbitrary \(\delta \varvec{a}\). This leads to a nonlinear system of 6N equations with 6N unknowns. The solution is obtained by applying the Newton-Raphson method. For that, one needs the consistent linearization of the residual (62) with respect to the unknowns. The solution of the nonlinear system of equations completes a single timestep evaluation within the Newmark method of Sect. 2.2.
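The per-timestep solution described above can be sketched as follows. This is an illustrative Python toy, not the actual implementation: a hypothetical 2-DOF residual stands in for the 6N-dimensional system, and the linearized system is solved directly.

```python
# Illustrative sketch (not the paper's code): one implicit timestep solved by
# Newton-Raphson on a residual r(a) = 0, where a collects the incremental DOFs.
# The residual and tangent below are hypothetical 2-DOF stand-ins.

def newton_raphson(residual, tangent, a0, tol=1e-10, max_iter=50):
    """Solve r(a) = 0 for the incremental DOF vector a (2-DOF toy case)."""
    a = list(a0)
    for _ in range(max_iter):
        r = residual(a)
        if max(abs(x) for x in r) < tol:
            return a
        # Solve the linearized 2x2 system K * da = -r (K = dr/da) by Cramer.
        K = tangent(a)
        det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
        a[0] += (-r[0] * K[1][1] + r[1] * K[0][1]) / det
        a[1] += (-r[1] * K[0][0] + r[0] * K[1][0]) / det
    raise RuntimeError("Newton-Raphson did not converge")

# Toy residual: r1 = a0^3 + a1 - 1, r2 = a0 - a1 (root where a0 = a1, a0^3 + a0 = 1)
r = lambda a: [a[0]**3 + a[1] - 1.0, a[0] - a[1]]
K = lambda a: [[3.0 * a[0]**2, 1.0], [1.0, -1.0]]

a = newton_raphson(r, K, [1.0, 0.0])
```

In the actual model the tangent is sparse and the linear solve exploits that structure, as discussed in the Remark below.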
Consistent Linearization
This process usually leads to cumbersome algebraic work. Alternatively (as done here), one can employ automatic differentiation techniques, leading to automatic code generation for such complex expressions. Here the tool AceGen is used, see [61]. For the automatic differentiation, one needs a symbolic computational framework to define all the expressions necessary to evaluate the residual (62). All expressions must be defined as functions of the model unknowns (in our case, the displacements and rotations). With that, the AceGen tool is able to automatically evaluate the partial derivatives of all quantities with respect to the model unknowns (DOF) and perform their consistent linearization. The total consistent linearization is composed of contributions stemming from each particle, which are independent of each other, and of contact terms, which couple pairs of particles. Our strategy was to separate these effects both for the residual evaluation and for its consistent linearization. For each particle, one has to write the contributions \(\varvec{f}_t^B\), \(\varvec{f}_e^B\), \(\varvec{\mu }_t^B\), and \(\varvec{\mu }_e^B\). In these quantities, the time-derivative terms have to be written following the adopted Newmark scheme, yielding expressions dependent only on the model DOF. Programming these expressions in the AceGen tool, one can evaluate the necessary linearizations. The contribution of each particle is included in the global residual and global tangent by a procedure similar to that used in the FEM, when including local influences of an element in the global system of equations.
The consistent linearizations of contact terms follow a similar procedure. One has to write \(\varvec{f}_c^B\) and \(\varvec{\mu }_c^B\) as functions of the DOF and use the same tool to perform the consistent linearization. Afterwards, the contact contributions are included in the global residual and global tangent. As each contact contribution involves the DOF of two particles, these contributions are responsible for the overall coupling between particles in the system. There are efficient ways of programming the AceGen tool to provide directly the residual and its consistent linearization, using potentials (when available) or pseudo-potentials, instead of defining the residual directly and requesting its linearization. The reader can find more details on such techniques in [61].
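The potential-based route mentioned above can be illustrated with a minimal numeric stand-in. AceGen performs this differentiation symbolically with exact derivatives; the sketch below uses central finite differences only to show the principle, and all names and values are hypothetical.

```python
# Illustrative stand-in for the potential-based workflow: a quadratic penalty
# potential V(g) in the normal gap g yields the force (residual contribution)
# as -dV/dg and the consistent tangent as d^2V/dg^2. Finite differences here
# replace the symbolic derivatives that AceGen would generate.

def d1(f, x, h=1e-6):
    """Central finite-difference first derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-6):
    """Central finite-difference second derivative."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

eps_n = 1.0e4                                # penalty stiffness (illustrative)
V = lambda g: 0.5 * eps_n * min(g, 0.0)**2   # active only for overlap g < 0

g = -1.0e-3                  # small overlap
force = -d1(V, g)            # repulsive normal force: -dV/dg = -eps_n * g
stiff = d2(V, g)             # consistent tangent: d2V/dg2 = eps_n
```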
Remark
The obtained system of nonlinear equations couples DOFs of different bodies due to contact contributions. As contact occurrences obey a physical neighborhood, the consistent linearization of \(\varvec{r}\), represented by a tangent stiffness matrix, has a sparse structure. This permits saving memory and solution time while solving each Newton-Raphson iteration within a timestep. Hence, solution techniques employed for nonlinear FEM models can be applied. This made it possible to implement the DEM in the Giraffe platform, which was developed for FEM, see [62], also aiming for future coupling of FEM and DEM models.
Night owl contacts
In the contact detection we speculate that non-overlapping BV at configuration i will not be responsible for active contact pairs at configuration \(i+1\). This has to be checked, however, by a second global search at the end of the timestep, after a converged solution has been achieved. All newly overlapping BV found are tested to investigate whether they represent active contacts, that is, whether \(g_n<\bar{g}_n\) according to the normal interface law expressed in (49). If no new active contact is found, the speculation was correct and one can advance to the next timestep, already using the just-obtained list of strong contact-candidates. Otherwise, one has to recompute the current timestep solution, additionally considering the new contacts in the model (we call these “night owl contacts”).
Obviously, night owl contacts are undesirable, since they require a time-consuming re-evaluation of the timestep. However, the speculation is likely to be correct when considering large enough inflation factors for the BV, which are related to the expected body velocity and the assumed timesteps. Too large inflation factors are not efficient, since many strong contact-candidates are elected, leading to high computational cost for their evaluation with numerous zero contributions to the model. Hence, one may expect optimal computational efficiency by choosing the smallest inflation factor that leads to a rare occurrence of night owl contacts. This may be estimated prior to the simulation by considering the expected body velocities, which lead to the distances a body can travel within the next timestep, and employing them to calculate the inflation of each BV. Similar ideas of speculation while looking for contact between bodies are found in [63] in the context of graphics computing.
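A minimal sketch of this inflation estimate follows. The names, the safety factor and the axis-aligned BV shape are illustrative assumptions, not the actual implementation: the inflation is taken as the distance a body can travel within one timestep, times a safety margin, and then enters the BV overlap test.

```python
# Hedged sketch: velocity-based inflation of bounding volumes (BV) for the
# contact speculation described above. Values and the AABB shape are
# illustrative assumptions.

def bv_inflation(v_max, dt, safety=1.5):
    """Distance a body may travel in one timestep, with a safety margin."""
    return safety * v_max * dt

def inflated_aabb_overlap(c1, h1, c2, h2, delta):
    """Axis-wise overlap test of two AABBs (center, half-extents),
    each inflated by delta."""
    return all(abs(a - b) <= ra + rb + 2.0 * delta
               for a, b, ra, rb in zip(c1, c2, h1, h2))

delta = bv_inflation(v_max=2.0, dt=1.0e-3)       # 3 mm for 2 m/s at dt = 1 ms
near = inflated_aabb_overlap((0.0, 0.0, 0.0), (0.1, 0.1, 0.1),
                             (0.204, 0.0, 0.0), (0.1, 0.1, 0.1), delta)
far = inflated_aabb_overlap((0.0, 0.0, 0.0), (0.1, 0.1, 0.1),
                            (0.3, 0.0, 0.0), (0.1, 0.1, 0.1), delta)
```

The `near` pair becomes a strong contact-candidate only because of the inflation; the `far` pair is discarded by the broad phase.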
Numerical examples
The discrete element model was implemented in the Giraffe platform, an in-house C++ code initially conceived for FEM models, but extended to encompass DEM implementations [62]. A new module for automatic contact search with the above discussed strategies was implemented. Specific codes for evaluating contributions to the global problem as in Sect. 2.6 were implemented in Mathematica\(^{\mathrm{TM}}\). AceGen [61] provided automatic differentiation, optimization and automatic code generation in C++, which was then integrated into the Giraffe platform.
The considered examples rely on definitions of bodies and boundaries. The geometry description of the boundaries is similar to that of the bodies: they are also composed of triangular regions. The body-to-boundary interaction is treated in the same way as the body-to-body case.
Block falling on the ground
In this example a single cube falls onto the ground. No damping is considered in the impact occurrences, in order to prove conservation of mechanical energy.
The cube has external face normals given in the local directions \(\pm \varvec{e}_1\), \(\pm \varvec{e}_3\) and \(\pm \varvec{e}_1 \times \varvec{e}_3\). Each cube face is composed of two triangular regions, for contact detection/treatment purposes, employing the surface parameterization proposed in (28). The ground is flat. All numerical data considered are shown in Table 1.
Distinct initial orientations are proposed for the block, leading to the cases:

(a)
The block is aligned such that its center of mass is positioned exactly above a given vertex. Successive impacts between this vertex and the ground occur and no rotation is induced to the block. Results are shown in Fig. 13.

(b)
The block is aligned such that its center of mass is positioned exactly above the center of a given edge. Successive impacts involving pointwise forces on the vertices of that edge occur, representing the line-to-face interaction by two pointwise forces. No rotation is induced to the block. Results are shown in Fig. 14.

(c)
The block is aligned such that its center of mass is positioned exactly above the center of a cube face. Successive impacts involving pointwise forces on the vertices of that face occur, representing the face-to-face interaction in a pointwise manner. No rotation is induced to the block. Results are shown in Fig. 15.

(d)
The block is aligned such that the contact force changes its angular momentum when it collides with the ground. Successive (bouncing) impacts occur involving distinct vertices and the ground, inducing a composition of translational and rotational movements to the block. Results are shown in Fig. 16.
For all cases one can observe conservation of mechanical energy, due to the absence of dissipative effects. To achieve that in the simulations, an adequate maximum timestep guideline was chosen, small enough to integrate the contact-impact interactions with reasonable precision. The contact parameter \(\bar{g}_n\) defines the so-called contact skin as an offset of the block surface. In this case, the chosen value represents \(1\%\) of the cube edge length.
Pile of blocks
In this example a set of blocks falling in the vertical direction under gravity load, forming a pile, is analyzed. A set of ten blocks is considered, each one modeled as in the previous example. The blocks are located such that they will touch on top of each other after forming the pile. Their initial center of mass positions and orientations can be found in Table 2. Significant energy dissipation is introduced to obtain a final state that is static. The contact properties are also given in Table 2. A proper timestep was chosen in order to handle the contact-impact scenarios correctly.
Figure 17 shows the final configuration of the formed pile. The red arrows indicate contact normal forces acting on the blocks; their length is proportional to the force magnitude. Note that the face-to-face interactions are handled by the model of equivalent pointwise forces on the edges, as described in Sect. 2.4.2 and illustrated in Fig. 4. Figure 18 depicts the time series of the reaction between pile and ground in the z direction, illustrating the series of impacts that the bottom block experiences. When the system finally comes to rest (static configuration), the magnitude of the reaction in the z direction equals the total weight of the set of blocks.
Sliding of a block
This example proposes a scenario to test the friction model. A block (the same cube as considered in example 1) rests initially on a flat surface. This configuration is shown in Fig. 19a, which exhibits the normal force components evenly distributed over the four vertices touching the surface. Table 3 summarizes the model characteristics, including the contact data with a high friction coefficient.
An external force in direction \(\varvec{i}\) is applied to the block, leading to a motion. The force magnitude is linearly ramped up in time (from 0 to 0.8 s), leading to force values from 0 to 16 N. We consider three distinct application points for the force, which are described by an extra point (node) in the model, defined as rigidly connected to the block. This additional node increases the number of DOFs, but is constrained to the body motion. The associated equations are solved by the Lagrange multiplier method, see [39] and [43]. With this procedure it is possible to apply forces/moments at any point of the body and not only at the defined pole P. Based on this formulation the following cases are simulated:

(a)
Force applied at the bottom face centroid of the block: this case does not introduce a toppling tendency to the block. As the external force increases, the friction forces react, also increasing in magnitude. Friction is evenly distributed over the four vertices, see Fig. 19b. This leads to the expected overall result for the total friction force applied to the block, as shown in Fig. 20. When the maximum tangential force (Coulomb limit) is reached, the friction force is constant and the block starts moving (dynamic friction takes place).

(b)
When the external force is applied at the block center of mass, toppling can occur. As the external force increases, the tangential forces due to stick increase in magnitude. Differently from the previous case, the normal forces now differ at the vertices, to fulfil static moment equilibrium. We define a leading edge of the block, which is the base edge related to the positive direction of \(\varvec{i}\), and a trailing edge, which is the opposite one. The leading edge vertices endure larger normal forces, while at the trailing edge the forces are smaller, see Fig. 19c. Hence, the Coulomb limit for friction leads to different tangential reactions at vertices located on leading and trailing edges. When the maximum available friction is reached at the trailing edge, there is still stick friction at the leading edge, leading to a sudden friction force redistribution. However, as the block is still in stick mode, the trailing edge recovers its friction in the next instants. This process leads to small fluctuations of the friction force over time. When both trailing and leading edge vertices reach the static Coulomb friction limit, see Fig. 20, motion starts and the fluctuations no longer exist, because the friction coefficient keeps the value \(\mu _d\) for all pointwise interactions. Even with the toppling possibility, the available friction was not enough in this loading case to topple the block.

(c)
In the last case, the external force is applied at the block top face centroid. This case behaves similarly to case (b), again having a tendency to topple. However, when the force reaches a certain level, toppling occurs while the block is still sticking to the surface. Figure 19d shows the block configuration just when toppling starts (note the absence of contact forces at the trailing edge). During toppling, the friction forces invert their direction and even assume zero values, because the block loses contact with the ground for a while, due to inertia effects, at the end of the simulation.
The results in Fig. 20 demonstrate consistency between the simulated cases and the expected physical results. The repeated sudden drops of the friction level, followed by its recoveries, in cases (b) and (c) are a result of the transition between stick and slip states. We did not assume a smoothing in the transition from \(\mu _s\) to \(\mu _d\). Hence, when the friction force reaches the Coulomb limit, the friction coefficient drops suddenly in a non-smooth way, independently of the occurrence of motion. This behaviour can be improved in future works by introducing a smooth transition from \(\mu _s\) to \(\mu _d\) based on the relative velocity at the contact interface, which is common in the context of FEM contact models. When solving this example considering the same value for the static and dynamic friction coefficients, such oscillations in the friction force no longer appear, as depicted in Fig. 21.
Tetrapod body experiencing a twist
A body with tetrapod shape is an example of a non-convex polyhedron. The data describing the body are summarized in Table 4. Furthermore, its stereolithography (stl) CAD file is provided as supplementary material. We consider that the tetrapod initially rests on the ground under gravity loading, as depicted in Fig. 22. Due to the body-ground interaction, vertical reaction forces of equal magnitude are present at each touching point. These forces balance the self-weight of the tetrapod. As an additional external load, we consider a moment applied at the body center of mass, introducing a twist about direction \(+\varvec{k}\). The magnitude of this external moment is linearly ramped during the simulation duration of 2 s, in the range from 0 to 750 N·m.
Contact between the tetrapod body and the ground is distributed over 3 vertices, as shown in Fig. 22. When applying the external moment load, the tetrapod body remains in static equilibrium due to friction, as indicated in Fig. 22 by red arrows. As the external moment increases in magnitude, friction also increases, up to the Coulomb limit, when finally the system exhibits a transition from statics to dynamics and the tetrapod body starts to twist.
Figure 23 shows the reactive moment induced by the friction forces, which balances the external moment up to approximately 1.6 s, when the Coulomb limit is reached. From this instant on, the induced reactive moment remains constant. Note that in this case we did not consider a difference between static and dynamic friction coefficients, which would result in a drop of the reactive friction moment when twisting starts.
Box of blocks
The interaction between several bodies is discussed in this example. First, we establish an initial positioning for a set of 564 identical particles. Each one has the same geometry as the cube proposed in the first example. The cubes are initially positioned without overlapping, having arbitrary orientations, as shown in Fig. 24a. We define four planar bounding surfaces, also depicted in Fig. 24, which form a box. The bottom of the box is a square with side length 1.0 m. Table 5 summarizes environment and solution data, such as the contact data considered for body-body and body-boundary interactions.
From the initial state the particles move inside the box under gravity, forming the pack shown in Fig. 24b.
A second phase of the simulation follows by removing the lateral walls of the box, inducing the pack to collapse. The particles spread over the ground. Figure 25 depicts a sequence of selected configurations assumed by the model as time advances, until the final (static) configuration is achieved. Note that some blocks close to the middle of the pack still form a small pile at the end of the simulation, which is related to friction.
We employed here an automatic timestepping, which adapts the timestep value according to the convergence difficulties encountered during the solution. This adaptive scheme is governed by a prescribed timestep range.
Active contact pairs were monitored along the time evolution of the simulation. Figure 26 shows, for the second phase (pack collapse) of the simulation, the number of active body-to-body and body-to-boundary contacts. All contacts are handled automatically by the implemented solver, considering the contact degenerations proposed in Sect. 2.4.2.
Box of tetrapod-shaped particles
This example considers the collapse of a pack of tetrapod-shaped particles (the same as presented in example 4). Handling particles of this shape within a DEM simulation is quite complex, since the tetrapod has thin tips and concave regions. Thin tips are difficult to deal with in contact algorithms that are based on the computation of penetrations, because spurious contact detection can occur. Our scheme takes advantage of the barrier-based normal interface law, which makes this treatment much simpler and leads to a proper scheme for handling contact of thin tips. Moreover, concavities are challenging due to the difficulties of dealing with multiple pointwise contacts, as discussed in the scheme shown in Fig. 9. Therefore, we see the tetrapod-shaped particles as an interesting test for the new approach to the treatment of contact discussed in this paper.
To establish an initial pack of tetrapod bodies, the following steps are performed:

(a)
establishment of an initial configuration with a set of 1005 bodies without overlapping and with arbitrary orientations, similar to example 5, see Fig. 24a;

(b)
dropping all bodies into the box, similar to the procedure shown in Fig. 24b. This process leads to a first pack of bodies. However, in this case we would like to obtain a more compact packing, thus additional steps follow;

(c)
moving the lateral walls of the box towards the center of the pack, thus forcing the bodies to adjust their positions and increasing the height of the pack. The final shape of the box bottom is a square with side length 7.0 m;

(d)
turning off the friction forces, as an artificial feature for compaction. This forces the particles to reorganize, filling many voids, due to the absence of friction;

(e)
turning on the friction forces again and letting the pack come to rest, as shown in Fig. 27a. This represents the initial configuration for the simulation of the pack collapse.
For the evaluation of the pack collapse, we suddenly removed the lateral walls of the filled box shown in Fig. 27a. We performed a simulation of 7.0 s duration considering only the gravitational field as source of external loads (in direction \(\varvec{k}\)) with magnitude 9.81 m/\(s^{2}\). The boundary is defined only by the ground, considered as a flat surface. The contact model for all interactions has the same properties considered in example 4, as shown in Table 4.
Figure 27b depicts an intermediate configuration after 3.0 s of simulation and Fig. 27c shows the final configuration obtained after 7.0 s. The same final configuration may be seen in a lateral view in Fig. 28, which shows a characteristic feature of tetrapod interaction: many interlockings occur due to their particular shape. Figure 29 depicts the number of active contact pairs along time, considering both body-to-body and body-to-boundary interactions.
In examples 5 and 6, we used an automatic timestepping solver to guide the solution. The solver automatically adjusts the timestep size according to the success or failure of the Newton-Raphson iteration. When contact-impact takes place, convergence difficulties arise and the solver has to decrease the timestep for a successful integration. In systems with a high dissipation level, such as examples 5 and 6, one can expect an appropriate solution to be obtained with this timestepping strategy, even without a strict control of the energy dissipation in each single impact occurrence (as done in example 1). This approach is only possible when solving the dynamics equations with an implicit solver, which has no upper bound for the timestep. To set the solver timestep guidelines, however, one needs information based on the physics of the problem, as a reference for the timestep range.
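A hedged sketch of such a convergence-driven timestep controller follows. The growth/shrink factors, bounds and iteration threshold are illustrative assumptions, not Giraffe's actual rules.

```python
# Adaptive timestepping driven by Newton-Raphson convergence, as described
# above: shrink the step after a failed iteration, grow it after an easy one,
# always staying within a prescribed guideline range. All factors and bounds
# are illustrative.

def adapt_dt(dt, converged, n_iter,
             dt_min=1.0e-5, dt_max=1.0e-2, grow=1.5, shrink=0.5, easy=4):
    if not converged:
        dt *= shrink          # contact-impact: retry the step with smaller dt
    elif n_iter <= easy:
        dt *= grow            # smooth phase: allow larger steps
    return min(max(dt, dt_min), dt_max)

dt_fail = adapt_dt(1.0e-3, converged=False, n_iter=0)  # halved to 5e-4
dt_easy = adapt_dt(8.0e-3, converged=True, n_iter=3)   # grown, capped at dt_max
```

The guideline range [`dt_min`, `dt_max`] plays the role of the physics-based reference mentioned in the text.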
Figure 30 shows the time evolution of the kinetic energy of the system for example 6, demonstrating the expected result during the pack collapse: an initial increase of the system kinetic energy (during the pack dismounting), followed by a drop of the kinetic energy to zero, as the particles find their new static positions on the ground or above other particles, forming a pile.
Figure 31 shows the timesteps employed during the implicit solution. One can see clearly that smaller timesteps were required only between 1.0 s and 2.0 s, which coincides with the peak in the kinetic energy. During this simulation contact interactions experienced very small gaps, reaching \(0.001 \bar{g}_n\) or even smaller values, leading to a very high contact stiffness in (49). An explicit integration scheme would have needed a timestep smaller than \(\varDelta t_{c}=10^{-5}\) s, as can be seen in Fig. 11. Therefore, using data from Fig. 31, the average timestep of \(\varDelta t= 8.2 \cdot 10^{-4}\) s is 82 times larger than \(\varDelta t_{c}\). With that, even with the higher computational cost of each timestep integration in an implicit scheme, the practically used timestep value leads to a break-even in computing time, and the implicit scheme has the advantages of more control over energy and very large timesteps when the system is close to a static response. As a reference for the computational solving time, step (e) of this example (pack collapse) takes about 4 hours to complete on a CPU Intel Xeon W2135 3.7 GHz with 6 cores.
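The step-size comparison above can be verified with a one-line calculation, using the values stated in the text:

```python
# Check of the implicit vs. explicit timestep comparison stated in the text.
dt_avg = 8.2e-4            # average implicit timestep [s], from Fig. 31
dt_crit = 1.0e-5           # explicit critical timestep [s], from Fig. 11
ratio = dt_avg / dt_crit   # about 82
```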
Due to the complex shape of the tetrapods, one may face many more difficulties in this example to achieve convergence within each timestep. The main challenge was related to the description of the contacts, particularly in concave regions. Scenarios such as the one shown in Fig. 9 are frequently found and can lead to the coexistence of multiple pointwise interactions in concave regions. Moreover, the search for equilibrium in the Newton-Raphson iterations may lead to alternating patterns of switching some contact pairs on/off, due to entering/exiting their range of validity of projection (LCP solution). We succeeded in circumventing this problem in the presented example by specific contact hierarchy rules, constructed to foresee some problematic scenarios. As an example, one may inactivate a vertex-to-edge contact in particular cases, when facing concave edges. When contact has already been established between a given vertex and both neighboring faces of a concave edge, one may inactivate a possible vertex-to-edge contact, as in the example of Fig. 9. Similar ideas were also implemented for vertex-to-vertex interactions. One may inactivate a vertex-to-vertex contact when some of the faces or edges attached to a vertex already present an active contact region with the other vertex.
This kind of geometry-based observation, transformed into a set of rules to automatically switch off some detected contacts, improves substantially the convergence of the Newton-Raphson iteration within each timestep, because it avoids alternating on/off patterns in contact regions. Here we see a need for further research and improvements of the method, especially an in-depth investigation of the above mentioned rules, which is a purely geometrical matter.
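One of these geometry-based rules can be sketched as follows. The data structures and identifiers are illustrative, not the actual implementation: a vertex-to-edge pair on a concave edge is switched off once the same vertex already has active contacts with both faces adjacent to that edge.

```python
# Hedged sketch of a contact hierarchy rule: inactivate a vertex-to-edge pair
# when the vertex already has active contacts with both faces adjacent to the
# (concave) edge, as in the scenario of Fig. 9. Identifiers are hypothetical.

def inactivate_vertex_edge(vertex, edge, adjacent_faces, active_pairs):
    """True if the vertex-edge contact pair should be switched off."""
    return all((vertex, face) in active_pairs
               for face in adjacent_faces[edge])

adjacent_faces = {"e1": ("fA", "fB")}            # concave edge and its faces
both_active = {("v7", "fA"), ("v7", "fB")}
one_active = {("v7", "fA")}

drop = inactivate_vertex_edge("v7", "e1", adjacent_faces, both_active)
keep = inactivate_vertex_edge("v7", "e1", adjacent_faces, one_active)
```

Applying such a rule before each Newton-Raphson iteration removes the redundant pair that would otherwise toggle on and off between iterations.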
Conclusions
In the present work we propose a discrete element method considering particles as general rigid polyhedra. Our main novelty is related to the contact treatment, where constraints are set up for multiple pointwise interactions between particles. We presented a strategy of splitting the external surface of each particle into a collection of subregions associated with geometric entities: vertices, edges and faces. Combinations of subregions form contact pairs. Each one is treated considering a distinct degeneration of a basic surface-to-surface formulation. The strategy is able to handle contact involving bodies with complex and general shapes, including thin tips and concave regions, as demonstrated in the numerical examples.
We propose a novel interface law for the normal contact treatment, composed of a penalty-based and a barrier-based law, with the advantage of avoiding penetration between particles, instead allowing a controlled level of proximity. For frictional cases, we employed a Coulomb-based treatment, considering distinct static/dynamic friction coefficients.
The proposed formulation was implemented considering an automatic contact detection scheme based on the overlap of inflated bounding volumes associated with each considered geometric entity. Each particle has a bounding volume and a set of sub-bounding-volumes, which yield the contact-candidates. Only strong candidates are investigated by the introduced Local Contact Problem. This leads to a computationally feasible scheme, even when numerous contact and proximity tests are involved.
We expect a high potential for the application of the present method to practical engineering problems, especially those in which the particle shape is important.
References
 1.
Cundall PA, Strack ODL (1979) A discrete numerical model for granular assemblies. Géotechnique 29(1):47–65
 2.
Taghizadeh K, Combe G, Luding S (2017) ALERT Doctoral School 2017 Discrete Element Modeling. The Alliance of Laboratories in Europe for Education, Research and Technology, ALERT Geomaterials, France
 3.
Pöschel Thorsten, Schwager Thomas (2005) Computational Granular Dynamics. Springer, Berlin
 4.
Griebel M, Knapek S, Zumbusch G (2007) Numerical Simulation in Molecular Dynamics. Numerics, Algorithms, Parallelization, Applications. Springer, Berlin Heidelberg
 5.
Cho GyeChun, Dodds Jake, Carlos Santamarina J (2006) Particle shape effects on packing density, stiffness, and strength: natural and crushed sands. J Geotech Geoenviron Eng 132(5):591–602
 6.
Höhner D, Wirtz S, KruggelEmden H, Scherer V (2011) Comparison of the multisphere and polyhedral approach to simulate nonspherical particles within the discrete element method: Influence on temporal force evolution for multiple contacts. Powder Technol 208(3):643–656
 7.
Kačianauskas Rimantas, Tumonis Liudas, Džiugys Algis (2014) Simulation of the normal impact of randomly shaped quasispherical particles. Granular Matter 16(3):339–347
 8.
Zhao Bo, An Xizhong, Zhao Haiyang, Gou Dazhao, Shen Lingling, Sun Xudong (2020) DEM simulation on random packings of binary tetrahedronsphere mixtures. Powder Technol 361:160–170
 9.
Irazábal Joaquín, Salazar Fernando, Santasusana Miquel, Oñate Eugenio (2019) Effect of the integration scheme on the rotation of nonspherical particles with the discrete element method. Comput Particle Mech 6(4):545–559
 10.
Smeets Bart, Odenthal Tim, Keresztes Janos, Vanmaercke Simon, Van Liedekerke Paul, Tijskens Engelbert, Saeys Wouter, Van Oosterwyck Hans, Ramon Herman (2014) Modeling contact interactions between triangulated rounded bodies for the discrete element method. Comp Methods Appl Mech Eng 277(2014):219–238
 11.
Lim Keng Wit, Krabbenhoft Kristian, Andrade José E (2014) On the contact treatment of nonconvex particles in the granular element method. Comput Particle Mech 1(3):257–275
 12.
Govender Nicolin, Wilke Daniel N, Kok Schalk, Els Rosanne (2014) Development of a convex polyhedral discrete element simulation framework for NVIDIA Kepler based GPUs. J Comput Appl Math 270:386–400
 13.
Govender Nicolin, Wilke Daniel N, Pizette Patrick, Abriak Nor Edine (2018) A study of shape nonuniformity and polydispersity in hopper discharge of spherical and polyhedral particle systems using the BlazeDEM GPU code. Appl Math Comput 319:318–336
 14.
Nassauer Benjamin, Liedke Thomas, Kuna Meinhard (2013) Polyhedral particles for the discrete element method: Geometry representation, contact detection and particle generation. Granular Matter 15(1):85–93
 15.
Smeets Bart, Odenthal Tim, Vanmaercke Simon, Ramon Herman (2015) Polygonbased contact description for modeling arbitrary polyhedra in the Discrete Element Method. Comp Methods Appl Mech Eng 290:277–289
 16.
Zheng Fei, Jiao Yu Yong, Sitar Nicholas (2018) Generalized contact model for polyhedra in threedimensional discontinuous deformation analysis. Int J Numer Anal Methods Geomech 42(13):1471–1492
 17.
Jean M (1999) The nonsmooth contact dynamics method. Comp Methods Appl Mech Eng 177(3–4):235–257
 18.
Moreau JJ (1999) Numerical aspects of the sweeping process. Comp Methods Appl Mech Eng 177(3–4):329–349
 19.
Dubois Frédéric, Acary Vincent, Jean Michel (2018) La méthode de la dynamique des contacts, histoire d’une mécanique non régulière. Comptes Rendus Mecanique 346(3):247–262
 20.
Wellmann Christian, Lillie Claudia, Wriggers Peter (2008) A contact detection algorithm for superellipsoids based on the commonnormal concept. Eng Comput (Swansea, Wales) 25(5):432–442
 21.
Wellmann Christian, Wriggers Peter (2012) A twoscale model of granular materials. Comp Methods Appl Mech Eng 205–208(1):46–58
 22.
Zhao Yongzhi, Lei Xu, Umbanhowar Paul B, Lueptow Richard M (2019) Discrete element simulation of cylindrical particles using superellipsoids. Particuology 46:55–66
 23.
Andrade José E, Lim Keng Wit, Avila Carlos F, Vlahinić Ivan (2012) Granular element method for computational particle mechanics. Comput Methods Appl Mech Eng 241–244:262–274
 24.
Kawamoto Reid, Andò Edward, Viggiani Gioacchino, Andrade José E (2016) Level set discrete element method for threedimensional computations with triaxial case study. J Mech Phys Solids 91:1–13
 25.
Kawamoto Reid, Andò Edward, Viggiani Gioacchino, Andrade José E (2018) All you need is shape: predicting shear banding in sand with LSDEM. J Mech Phys Solids 111:375–392
 26.
Suhr Bettina, Six Klaus (2017) Parametrisation of a DEM model for railway ballast under different load cases. Granular Matter 19(4):1–16
 27.
Bian Xuecheng, Li Wei, Qian Yu, Tutumluer Erol (2019) Micromechanical particle interactions in railway ballast through DEM simulations of direct shear tests. Int J Geomech 19(5):04019031
 28.
Liu Yangzepeng, Gao Rui, Chen Jing (2019) Exploring the influence of sphericity on the mechanical behaviors of ballast particles subjected to direct shear. Granular Matter 21(4):1–17
 29.
Hoang Thi Minh Phuong, Alart Pierre, Dureisseix David, Saussine Gilles (2011) A domain decomposition method for granular dynamics using discrete elements and application to railway ballast. Ann Solid Struct Mech 2(2–4):87–98
 30.
Zhou Yu, Wang Huabin, Zhou Bo, Li Jianmei (2018) DEMaided direct shear testing of granular sands incorporating realistic particle shape. Granular Matter 20(3):1–12
 31.
Höhner D, Wirtz S, Scherer V (2015) A study on the influence of particle shape on the mechanical interactions of granular media in a hopper using the Discrete Element Method. Powder Technol 278:286–305
 32.
Gay Neto Alfredo, Wriggers Peter (2019) Computing pointwise contact between bodies: a class of formulations based on mastermaster approach. Comput Mech 64(3):585–609
 33.
Gay Neto Alfredo, Wriggers Peter (2020) Mastermaster frictional contact and applications for beamshell interaction. Comput Mech 66(6):1213–1235
 34.
Shabana Ahmed A (2013) Dynamics of Multibody Systems, 4th edn. Cambridge University Press, NY
 35.
Wood Javier Bonet Richard D (2008) Nonlinear Continuum Mechanics for Finite Element Analysis. Cambridge University Press, NY
 36.
Wriggers P (2006) Computational contact mechanics. Springer, Berlin
 37.
Pimenta PM, Campello EMB, Wriggers P (2008) An exact conserving algorithm for nonlinear dynamics with rotational DOFs and general hyperelasticity. Part 1: Rods. Comput Mech 42(5):715–732
 38.
Gay Neto Alfredo (2016) Dynamics of offshore risers using a geometricallyexact beam model with hydrodynamic loads and contact with the seabed. Eng Struct 125:438–454
 39.
Gay Neto Alfredo (2017) Simulation of mechanisms modeled by geometricallyexact beams using Rodrigues rotation parameters. Comput Mech 59(3):459–481
 40.
Pimenta PM, Campello EMB, Wriggers P (2004) A fully nonlinear multiparameter shell model with thickness variation and a triangular shell finite element. Comput Mech 34(3):181–193
 41.
Campello EMB, Pimenta PM, Wriggers P (2011) An exact conserving algorithm for nonlinear dynamics with rotational DOFs and general hyperelasticity. Part 2: Shells. Comput Mech 48(2):195–211
 42.
Ota NSN, Wilson L, Gay Neto A, Pellegrino S, Pimenta PM (2016) Nonlinear dynamic analysis of creased shells. Finite Elements Anal Des 121:64–74
 43.
Refachinho de Campos Paulo R, Gay Neto Alfredo (2018) Rigid body formulation in a finite element context with contact interaction. Comput Mech 62(6):1369–1398
 44.
Campello Eduardo MB (2015) A description of rotations for DEM models of particle systems. Comput Particle Mech 2(2):109–125
 45.
Gay Neto Alfredo, de Mattos Pimenta Paulo, Wriggers Peter (2018) Contact between spheres and general surfaces. Comp Methods Appl Mech Eng 328:686–716
 46.
Pimenta PM, Campello EMB (2001) Geometrically nonlinear analysis of thinwalled space frames. In: Proceedings of the Second European Conference on Computational Mechanics, II ECCM, Cracow, Poland
 47.
Wriggers P (2008) Nonlinear finite element methods. Springer, Berlin
 48.
Ibrahimbegović Adnan, Mikdad Mazen Al (1998) Finite rotations in dynamics of beams and implicit timestepping schemes. Int J Numer Methods Eng 41(5):781–814
 49.
Ibrahimbegović Adnan, Mamouri Saïd (2000) On rigid components and joint constraints in nonlinear dynamics of flexible multibody systems employing 3D geometrically exact beam model. Comp Methods Appl Mech Eng 188(4):805–831
 50.
Gay Neto Alfredo, Pimenta Paulo M, Wriggers Peter (2016) A mastersurface to mastersurface formulation for beam to beam contact. Part I: frictionless interaction. Comp Methods Appl Mech Eng 303:400–429
 51.
Gay Neto Alfredo, Pimenta Paulo M, Wriggers Peter (2017) A mastersurface to mastersurface formulation for beam to beam contact. Part II: frictional interaction. Comp Methods Appl Mech Eng 319:146–174
 52.
Harmon David, Vouga Etienne, Smith Breannan, Tamstorf Rasmus, Grinspun Eitan (2009) Asynchronous contact mechanics. SIGGRAPH ’09 (ACM Transactions on Graphics)
 53.
Li Minchen, Ac Hary Ferguson Z, Schneider Teseo, Langlois Timothy, Zorin Denis, Panozzo Daniele, Jiang Chenfanfu, Kaufman Danny M (2020) Incremental Potential Contact: Intersection And Inversionfree, LargeDeformation Dynamics. ACM Trans Graph, 39(4):49. https://doi.org/10.1145/3386569.3392425
 54.
Verlet Loup (1967) Computer experiments on classical fluids. I. thermodynamical properties of lennardjones molecules. J Phys D Appl Phys 159(1):98–103
 55.
Johnson KL (1987) Johnson. Cambridge University Press, NY
 56.
Bandeira Alex Alves, Zohdi Tarek Ismail (2019) 3D numerical simulations of granular materials using DEM models considering rolling phenomena. Comput Particle Mech 6(1):97–131
 57.
Luding Stefan (2008) Introduction to discrete element methods: basic of contact force models and how to perform the micromacro transition to continuum theory. Eur J Environ Civil Eng 12(7–8):785–826
 58.
Campello EMB (2016) Um modelo computacional para o estudo de materiais granulares. Habilitation Thesis. University of Sao Paulo, Brazil (in Portuguese)
 59.
Yao Zhenhua, Wang Jian Sheng, Liu Gui Rong, Cheng Min (2004) Improved neighbor list algorithm in molecular simulations using cell decomposition and data sorting method. Comput Phys Commun 161(1–2):27–35
 60.
Li Wan Qing, Ying Tang, Jian Wan, Yu Dong Jin (2010) Comparison research on the neighbor list algorithms: Verlet table and linkedcell. Comp Phys Commun 181(10):1682–1686
 61.
Korelc J, Wriggers P (2016) Automation of Finite Element Methods. Springer International Publishing, Switzerland
 62.
Gay Neto Alfredo (2020) Generic Interface Readily Accessible for Finite Elements (GIRAFFE). User’s Manual. Available at: sites.poli.usp.br/p/alfredo.gay/giraffe.html
 63.
Ainsley Samantha, Vouga Etienne, Grinspun Eitan, Tamstorf Rasmus (2012) Speculative parallel asynchronous contact mechanics. ACM Trans Graph 31(6:151. https://doi.org/10.1145/2366145.2366170
Acknowledgements
This study was financed by the Alexander von Humboldt Foundation and in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
For the presentation of the analytical LCP solutions, we consider that the current absolute positions of the vertices of the triangular faces are given by \(\varvec{x}_{1A}\), \(\varvec{x}_{2A}\), \(\varvec{x}_{3A}\) (face A) and \(\varvec{x}_{1B}\), \(\varvec{x}_{2B}\), \(\varvec{x}_{3B}\) (face B).
Vertex-to-face
Here we consider a vertex of face A, named \(\varvec{x}_{A}\), projected onto face B. One may find an analytical solution by proposing for face B the alternative parameterization, fully compatible with (28):
where:
The associated orthogonality relations are
which permit a straightforward evaluation of the LCP solution, given by \(\bar{\zeta }_B\) and \(\bar{\theta }_B\).
The same ideas apply when projecting a vertex of face B onto face A, adapting the presented equations accordingly.
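For a planar triangular face, the two orthogonality relations reduce to a 2×2 linear system. The sketch below is only an illustration of that reduction, assuming a linear parameterization \(\varvec{\Gamma }_B(\zeta ,\theta ) = \varvec{x}_{1B} + \zeta \varvec{e}_1 + \theta \varvec{e}_2\) with \(\varvec{e}_1 = \varvec{x}_{2B} - \varvec{x}_{1B}\) and \(\varvec{e}_2 = \varvec{x}_{3B} - \varvec{x}_{1B}\); this convention and the function name are ours, not the parameterization (28) of the paper.

```python
# Hypothetical sketch (not the paper's implementation): project a vertex p
# onto the plane of a triangle (x1, x2, x3) parameterized linearly as
# G(z, t) = x1 + z*e1 + t*e2, with e1 = x2 - x1 and e2 = x3 - x1.
def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

def project_vertex_on_face(p, x1, x2, x3):
    """Solve the orthogonality relations (p - G(z, t)) . e1 = 0 and
    (p - G(z, t)) . e2 = 0; return (z, t, gap_vector)."""
    e1, e2, r = sub(x2, x1), sub(x3, x1), sub(p, x1)
    a11, a12, a22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, r), dot(e2, r)
    det = a11 * a22 - a12 * a12        # > 0 for a non-degenerate triangle
    z = (b1 * a22 - b2 * a12) / det    # Cramer's rule on the 2x2 system
    t = (a11 * b2 - a12 * b1) / det
    closest = tuple(x1i + z * e1i + t * e2i
                    for x1i, e1i, e2i in zip(x1, e1, e2))
    return z, t, sub(p, closest)       # gap vector is normal to the face
```

The projection represents a vertex-to-face interaction only when \((\bar{\zeta },\bar{\theta })\) lies inside the parametric domain of the face; otherwise the edge and vertex interaction cases take over.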
Edge-to-edge
Here we consider an edge of face A connecting the vertices \(\varvec{x}_{1A}\) and \(\varvec{x}_{2A}\), and an edge of face B connecting the vertices \(\varvec{x}_{1B}\) and \(\varvec{x}_{2B}\). One may propose the curve parameterizations \(\gamma _A\) and \(\gamma _B\) for these edges:
where:
Note that these parameterizations are compatible with these edges being parts of surfaces described by (28).
The associated orthogonality relations are
which permit a straightforward evaluation of the LCP solution, given by \(\bar{\zeta }_A\) and \(\bar{\zeta }_B\). One can employ this solution for any edge of face A and any edge of face B, simply adapting the list with the sequence of vertices.
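For straight edges, the two orthogonality relations again form a 2×2 linear system, now in \(\bar{\zeta }_A\) and \(\bar{\zeta }_B\). A minimal sketch under an assumed linear edge parameterization \(\varvec{\Gamma }(\zeta ) = \varvec{x}_{1} + \zeta \varvec{d}\) (our convention and naming, not necessarily the paper's):

```python
# Hypothetical sketch (not the paper's implementation): closest points
# between the supporting lines of two edges, G_A(za) = x1a + za*da and
# G_B(zb) = x1b + zb*db, with da = x2a - x1a and db = x2b - x1b.
def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

def closest_edge_points(x1a, x2a, x1b, x2b, tol=1e-12):
    """Solve (G_A - G_B) . da = 0 and (G_A - G_B) . db = 0 for (za, zb).
    Returns None for (near-)parallel edges, where the solution is not
    unique and a different treatment is required."""
    da, db, r = sub(x2a, x1a), sub(x2b, x1b), sub(x1b, x1a)
    aa, ab, bb = dot(da, da), dot(da, db), dot(db, db)
    det = aa * bb - ab * ab              # vanishes for parallel edges
    if det <= tol * aa * bb:
        return None
    za = (dot(da, r) * bb - dot(db, r) * ab) / det
    zb = (dot(da, r) * ab - dot(db, r) * aa) / det
    return za, zb
```

As in the vertex-to-face case, the pair \((\bar{\zeta }_A,\bar{\zeta }_B)\) characterizes an edge-to-edge interaction only when both parameters fall inside their respective edge domains.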
Vertex-to-edge
Here we consider a vertex of face A, named \(\varvec{x}_{A}\), projected onto an edge of face B. One may find this analytical solution by proposing for face B the alternative parameterization of the edge \(\gamma _B\), as already pointed out in (67). In this case, the only orthogonality relation to be fulfilled is:
The same ideas apply when projecting a vertex of face B onto an edge of face A, adapting the presented equations accordingly. Moreover, one may employ this solution for any edge of a face, simply adapting the list with the sequence of vertices.
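The single orthogonality relation of the vertex-to-edge case has a closed-form solution. A sketch under the same assumed linear edge parameterization used above (our convention and naming, not necessarily that of (67)):

```python
# Hypothetical sketch (not the paper's implementation): project a vertex p
# onto the supporting line of an edge G(z) = x1 + z*d, with d = x2 - x1.
def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

def project_vertex_on_edge(p, x1, x2):
    """The relation (p - G(z)) . d = 0 gives z directly; the edge is
    assumed non-degenerate (x1 != x2). Returns (z, gap_vector)."""
    d = sub(x2, x1)
    z = dot(d, sub(p, x1)) / dot(d, d)
    closest = tuple(x1i + z * di for x1i, di in zip(x1, d))
    return z, sub(p, closest)          # gap vector is orthogonal to d
```

The solution is a vertex-to-edge interaction only when \(\bar{\zeta }\) lies inside the edge's parametric domain; otherwise the vertex-to-vertex case applies.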
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Neto, A.G., Wriggers, P. Discrete element model for general polyhedra. Comp. Part. Mech. (2021). https://doi.org/10.1007/s40571-021-00415-z
Keywords
 Discrete Element Method
 Master-master contact
 Polyhedra
 Barrier Method
 Non-convex