# Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems


## Abstract

The objective of this paper is to find the optimum number of hierarchy levels and their cell sizes for contact detection algorithms based on a versatile hierarchical grid data structure, for polydisperse particle systems with arbitrary distribution of particle radii. These algorithms perform as fast as \(O(N)\) for \(N\) particles, but the prefactor can be as large as \(N\) for a given system, depending on the algorithm parameters chosen, making a recipe for choosing these parameters necessary. We estimate theoretically the calculation time of two distinct algorithms for particle systems with various packing fractions, where the sizes of the particles are modelled by an arbitrary probability density function. We suggest several methods for choosing the number of hierarchy levels and the respective cell sizes, based on truncated power-law radii distributions with different exponents and widths. The theoretical estimations are then compared with simulation results for particle systems with up to one million particles. The proposed recipe for selecting the optimal hierarchical grid parameters allows one to find contacts in arbitrarily polydisperse particle systems as fast as the commonly-used linked-cell method in purely monodisperse particle systems, i.e., extra work is avoided in the presence of polydispersity. Furthermore, the contact detection time per particle even decreases slightly with increasing polydispersity or decreasing particle packing fraction.

### Keywords

Contact detection · Collision detection · Hierarchical grid · Polydisperse · Computational cost · Discrete element

## 1 Introduction

Collision detection is a basic computational problem arising in systems that involve spatial interactions among many objects, such as particles, granules or atoms, in fields as diverse as robotics, computer graphics, physical simulations, cloth modelling, computational surgery and crowd simulations. All these systems have rather short-ranged interactions in common. Particle-based modelling techniques like the discrete element method, (event-driven) molecular dynamics, Monte-Carlo simulations and smoothed particle hydrodynamics, to name a few, play an important role in physically based simulations of powders, granular materials, fluids, colloids, polymers, liquid crystals, proteins and other materials. The performance of the computation relies on several factors, which include the physical model, on the one hand, and the contact detection algorithm used, on the other. When no suitable algorithm is used, the detection of pairwise interactions between particles can be one of the most time-consuming tasks in a calculation. Because the number of objects treated in simulations is often large, contact detection can become a computational bottleneck; the development of efficient contact detection algorithms is therefore crucial to the overall performance of simulations.

With the straightforward “all-to-all” approach each pair of particles is checked for collision. This requires \(O(N^2)\) collision checks for \(N\) particles, which is prohibitively expensive for large systems. More efficient contact detection methods use a two-phase approach to reduce the computational costs: a broad phase and a narrow phase [16]. The broad phase determines pairs of objects that might possibly collide. It is frequently done by dividing space into regions and testing if objects are close to each other in space. Because objects can only intersect or interact if they occupy the same region of space, the number of pairwise tests can be reduced to \(O(N)\). The pairs that “survive” the broad phase test are passed to the narrow phase, which uses specialised techniques to test each candidate pair for a real contact [6, 17, 34]. The latter is trivial for spherical particles, where one has to compare the distance between particle centres with the sum of particle radii, but can be very costly for particles of arbitrary shape. For example, if there are \(S\) surface points per particle, a naive scheme may take \(O(S^2)\) operations. More sophisticated schemes, such as the discrete function representation scheme, require on average \(O(S^{1/2})\) operations [34]. Since the broad phase basically acts as a filter for the narrow phase, choices for the two algorithms can usually be made independently.
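For spheres, the narrow-phase test described above reduces to comparing the centre distance with the sum of the radii. A minimal sketch (in Python, with hypothetical function and argument names; squared distances are compared to avoid a square root):

```python
def spheres_overlap(x1, r1, x2, r2):
    """Narrow-phase test for two spheres: they are in contact exactly when
    the distance between centres does not exceed the sum of the radii."""
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return d2 <= (r1 + r2) ** 2  # compare squared quantities, avoids sqrt

# two unit spheres whose centres are 1.5 apart overlap (1.5 < 2.0)
print(spheres_overlap((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))
```

The same test works in any dimension \(d\), since only the centre coordinates and radii enter.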

We distinguish three types of broad phase contact detection methods/data structures: (i) based on coordinate sorting (or spatial sorting), e.g., sweep and prune, (ii) based on Delaunay triangulation, and (iii) based on spatial subdivision, e.g., (hierarchical) grids (or cell-based methods) and trees (e.g., Octrees in \(3D\) and Quadtrees in \(2D\)). Below we briefly describe the above methods and their advantages and weaknesses, while for the detailed analysis see Refs. [4, 5, 16, 18, 19, 20, 27] and references therein.

Contact detection algorithms based on coordinate sorting imply maintaining particles in a sorted structure along each axis [23, 28]. These methods are not sensitive to the particle sizes (i.e., radii for spherical particles) and consume \(O(N)\) memory; but they require sorting, which can range in effort from \(O(N)\) to \(O(N^2)\), depending on the sorting method used, volatility of the sorting lists over time and spatial distribution of objects.

The Delaunay triangulation data structure consumes \(O(N)\) memory and it is not sensitive to the particle sizes when weighted triangulation is used [25]. However, it has the disadvantage that building (or re-building) the structure has a high computational cost, especially for moving particles. The use of flipping algorithms for maintaining and only incrementally updating the triangulation allows decreasing the overhead of re-building the triangulation [20], but unfortunately in three-dimensional systems flipping can get “stuck” [7]. Furthermore, its parallelisation and the maintenance of periodic boundary conditions (which are frequently used in particle simulations) are complicated.

The tree data structure for contact detection does not allow one to choose cell sizes at every level of hierarchy independently, therefore leaving no room for optimisation for various distributions of particle sizes [12, 31, 32]. Moreover, accessing neighbour sub-cubes in the tree is not straightforward since they can be nodes of different tree branches; no more details are given here since this method is not used any further.

The single-level grid-based contact-detection methods, like for example the linked-cell method [1, 9, 26], are straightforward, widely used and perform well for similarly sized objects. The problem of such methods is their inability to efficiently deal with particles of greatly varying sizes [10]. If the particles within the system are polydisperse, the cell size of a grid has to conform to the largest particle size. Then many small particles may occupy the same cell, which increases the number of pairwise checks and therefore severely degrades the computational performance.
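The linked-cell idea can be sketched as follows. This is a minimal Python illustration under assumed names, not the MercuryDPM implementation; for clarity it visits all \(3^d\) neighbour cells of each occupied cell and deduplicates with a set, rather than using the half-neighbourhood optimisation described in Sect. 2.2:

```python
import math
from collections import defaultdict
from itertools import product

def linked_cell_pairs(positions, cell_size):
    """Single-level linked-cell broad phase: map each particle to the grid
    cell containing its centre and report as candidates only pairs in the
    same or neighbouring cells. cell_size must be at least the largest
    particle diameter, otherwise contacts can be missed."""
    d = len(positions[0])
    cells = defaultdict(list)
    for p, x in enumerate(positions):
        cells[tuple(math.floor(xi / cell_size) for xi in x)].append(p)
    candidates = set()
    for cell, members in cells.items():
        for offset in product((-1, 0, 1), repeat=d):
            neighbour = tuple(ci + oi for ci, oi in zip(cell, offset))
            for p in members:
                for q in cells.get(neighbour, ()):
                    if p < q:  # report each candidate pair only once
                        candidates.add((p, q))
    return candidates
```

With many small particles forced into cells sized for the largest particle, the inner loops over cell members are exactly where the quadratic-in-occupancy cost of the monogrid appears.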

This cell size problem can effectively be addressed by the use of multi-level hierarchical grids [3, 4, 8, 10, 15, 16, 22, 24]. Particles are positioned at different levels (according to their size) and collision checks are performed in two steps: (i) within the level of insertion (which is usually performed in the same way as in the linked-cell method), and (ii) cross-level checks. The cell size at each hierarchy level can be selected independently, therefore one can adapt grid cells according to a given particle size distribution. Several algorithms based on hierarchical grid data structures were employed, which differ in the way in which the above two steps are implemented.

The hierarchical grid data structure performs in \(O(N)\) time for arbitrarily polydisperse systems and uses \(O(N)\) memory. This data structure is robust, can be easily parallelised, allows straightforward handling of periodic boundary conditions and can easily deal with unbounded systems. Moreover, it provides \(O(1)\) access to the particle data and to all particle nearest neighbours, and, more importantly, allows for \(O(1)\) particle insertion and removal from the system, which is often needed in modelling of dynamical systems, such as hopper or granular flows. Finally, it provides a natural multi-scale framework, as particles from different hierarchy levels usually have different physical properties besides their size. For example, small particles are often fast (i.e., have higher velocity/energy) and big ones are slow, so the different hierarchy levels can carry different time scales. These are the reasons why we chose the hierarchical grid as our primary data structure for contact detection and analyse how to optimise it for fastest contact detection in widely polydisperse particle mixtures.

The hierarchical grid data structure has many parameters to configure: an arbitrary number of hierarchy levels plus an arbitrary cell size at every level of hierarchy (the cell sizes are, by convention, increasing with increasing level of hierarchy). The choice of parameters affects the number of contact checks and the overhead of the algorithm, i.e., the number of times the cells are accessed. Due to the many parameters involved, finding the optimal ones, i.e., those which minimise the average number of calculations, \(T\), is a non-trivial problem. It involves a multidimensional optimisation where even the dimension of the optimum (the number of levels) is unknown. We are not aware of any study where this question was fully addressed, except for the study by Ogarko et al. [22], in which the authors provided a hypothesis on the optimal choice of the hierarchical grid parameters and compared their theoretical predictions with simulation results.

In this study we theoretically analyse the performance complexity of the hierarchical grid data structure for contact detection in polydisperse particulate systems. We provide a detailed analysis of the average number of calculations, \(T\), for two distinct algorithms based on the hierarchical grid data structure as applied to polydisperse systems of spherical particles with a power-law distribution of radii, for various power-law exponents, and for various particle volume fractions. We compare several ways (methods) of choosing the hierarchical grid parameters (i.e., the number of levels and the cell sizes at each level) and present the optimal parameter choice. We provide instructions on which hierarchical grid contact detection algorithm should be used and how to choose the optimal parameters for a given arbitrary distribution of particle radii. Finally, we compare our theoretical predictions with simulation results of realistic particle systems.

In the next section we outline the two different algorithms based on the hierarchical grid data structure that are used in this study. We then analyse the performance of the described algorithms and derive general estimates for the number of contact checks per particle in Sect. 3. Section 4 presents the types of particle size distributions considered in this study. In Sect. 5 we introduce several ways of choosing the hierarchical grid parameters and compare their expected performance. For selected parameters these expected performances are also compared with real discrete particle simulations, using the MercuryDPM code (mercurydpm.org) [11, 29, 30, 33]. Finally, the results are summarised and discussed, with some conclusions in Sect. 6.

## 2 Algorithm

The hierarchical grid (HGrid) algorithm is designed to determine all pairs of particles, in a set of \(N\) particles in a \(d\)-dimensional Euclidean space, that overlap or interact. The split between local particle geometry and global neighbour searching is achieved through the use of a bounding volume. This way, the contact detection algorithm is able to treat all particle shapes in the same, simplified way. While any bounding volume can be used, the sphere is chosen for this implementation since it is represented simply by a position of its centre \(\varvec{x}_{p}\) and its radius \(r_{p}\), and is rotationally invariant. For differently-sized spheres, \(r_{\text {min}}\) and \(r_{\text {max}}\) denote the minimum and the maximum particle radius, respectively, and \(\omega =r_{\text {max}}/r_{\text {min}}\) is the extreme size ratio.

The algorithm consists of two phases. In the first “mapping phase” all the particles are mapped into a hierarchical grid-space (Sect. 2.1). In the second “contact detection phase” (Sect. 2.2) the potential contact partners are determined for every particle in the system. This list of potentially contacting particle pairs is the output of the algorithm. With this list one can perform geometrical intersection tests to check if particles are really in contact, i.e., if they overlap. For spherical particles this can be achieved easily by comparing the distance between two particles with the sum of the radii. For non-spherical particles these tests become more difficult and computationally expensive; this is, however, beyond the scope of this paper.

The algorithm has to fulfil the following requirements:

- All pairs of particles that are in contact must be in the list of potential contacts, i.e., the algorithm is not allowed to miss any pair.
- The list of pairs of particles must be unique, i.e., no pair of contacts may appear twice in the list.
- The list of pairs of particles should be as small as possible.
- The computational time of the algorithm should be as small as possible and must, for large \(N\), scale linearly with the number of particles, i.e., \(O(N)\).
- The memory consumption of the algorithm must be proportional to the number of particles, i.e., \(O(N)\).

### 2.1 Mapping phase

The first \(d\) components of a \((d+1)\)-dimensional vector \(\varvec{c}\) represent cell indices (integers), and the last one is the associated hierarchy level.

It must be noted that the cell size of each level can be set independently, in contrast to contact detection methods which use a tree structure for partitioning the domain [4, 31, 32], where the cell sizes are taken as double (or triple) the size of the previous lower level of hierarchy, hence \(s_{h+1} = 2 s_{h}\) (or \(3s_h\)). The flexibility of independent \(s_{h}\) allows one to select the optimal cell sizes, according to the particle size distribution, to improve the performance of the contact detection algorithm.

Each particle \(p\) is mapped to a cell at its *level of insertion* \(h(p)\), defined as the lowest level where the cells are big enough to contain the particle \(p\):

$$\begin{aligned} h(p) := \min \left\{ h \in [1,L] \; : \; 2 r_{p} \le s_{h} \right\} . \end{aligned}$$
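The mapping phase can be sketched as follows (a Python illustration with hypothetical helper names; levels are 1-based and `cell_sizes[h-1]` holds \(s_h\), and the convention that a level-\(h\) cell can hold any particle with diameter \(2r_p \le s_h\) is assumed, consistent with the definition above):

```python
import math

def map_to_cell(x, h, cell_sizes):
    """Mapping function M(x, h): the first d components are the integer cell
    indices floor(x_i / s_h); the last component is the hierarchy level h."""
    s = cell_sizes[h - 1]
    return tuple(math.floor(xi / s) for xi in x) + (h,)

def level_of_insertion(r, cell_sizes):
    """Lowest level whose cells are big enough to hold a particle of
    radius r, i.e. the smallest h with 2*r <= s_h."""
    for h, s in enumerate(cell_sizes, start=1):
        if 2.0 * r <= s:
            return h
    raise ValueError("particle larger than the largest cell size")
```

Because the cell sizes are free parameters here, the same two functions serve any of the cell size distributions discussed in Sect. 5.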

### 2.2 Contact detection phase

After all particles are mapped to their cells, the contact detection phase is able to calculate all potential contacts. The contact detection is performed by looping over all particles \(p\) and searching for possible contacts with particles at the same hierarchy level \(h\) and for possible contacts at different hierarchy levels.

Searching for contacts at the same hierarchy level is performed using the classical linked-cell method [1]. The search is done in the cell where \(p\) is mapped to, i.e. \(\varvec{c}_{p}\), and in its neighbouring (surrounding) cells. Only half of the surrounding cells are searched, to avoid testing the same particle pair twice.

Searching for contacts at other hierarchy levels can be performed in two ways. The first one is the Top-Down method, illustrated in Fig. 1(top). In this method one searches for potential contacts only at levels \(j\) lower than the level of insertion: \(1\le j<h\). This implies that the particle \(p\) will be checked only against smaller particles, thus avoiding double checks for the same pair of particles. The second method, the Bottom-Up method, sketched in Fig. 1(bottom), does exactly the opposite. Here potential contacts are only searched for at hierarchy levels \(j\) higher than the level of insertion: \(h<j\le L\). This implies that the particle \(p\) will be checked only against larger particles, thus avoiding double checks for the same pair of particles.

- (1) Define the cells \(\varvec{c}^{\;\text {start}}\) and \(\varvec{c}^{\;\text {end}}\) at level \(j\) as
  $$\begin{aligned} \varvec{c}^{\;\text {start}} := M(\varvec{x}^{\;-}_{c}, j), \quad \text {and} \;\; \varvec{c}^{\;\text {end}} := M(\varvec{x}^{\;+}_{c}, j), \end{aligned}$$
  (4)
  where a search box (cube in 3D) is defined by \(\varvec{x}^{\;\pm }_{c} = \varvec{x}_{p} \pm \beta \sum _{i=1}^{d} \mathbf {e}_{i}\), with \(\beta = r_{p} + s_{j}/2\), and \(\mathbf {e}_{i}\) is the standard basis of \(\mathbb {R}^d\). Any particle \(q\) from level \(j\) with centre \(\varvec{x}_{q}\) outside this box cannot be in contact with particle \(p\), since the diameter of the largest particle at this level cannot exceed \(s_{j}\).
- (2) The search for potential contacts is performed in every cell \(\varvec{c} = (c_{1}, \ldots , c_{d}, j)\) for which
  $$\begin{aligned} c_{i}^{\;\text {start}}\le c_{i}\le c_{i}^{\;\text {end}} \quad \text {for all} \; i \in [1,d], \end{aligned}$$
  (5)
  where \(c_{i}\) denotes the \(i\)-th component of vector \(\varvec{c}\). In other words, each particle which was mapped to one of these cells is tested for contact with particle \(p\).
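Steps (1)–(2) above can be sketched as follows (a hedged Python illustration of Eqs. (4)–(5); the `cell_sizes` list and the tuple-based cell representation are assumptions of this sketch, not the paper's implementation):

```python
import math
from itertools import product

def cross_level_cells(x_p, r_p, j, cell_sizes):
    """Cells at level j that may contain contact partners of particle p,
    per Eqs. (4)-(5): the search box extends beta = r_p + s_j/2 from the
    particle centre in every direction; all cells between c_start and
    c_end (inclusive, per axis) are visited."""
    s_j = cell_sizes[j - 1]              # cell size at level j
    beta = r_p + s_j / 2.0
    c_start = [math.floor((xi - beta) / s_j) for xi in x_p]  # M(x_c^-, j)
    c_end = [math.floor((xi + beta) / s_j) for xi in x_p]    # M(x_c^+, j)
    ranges = [range(a, b + 1) for a, b in zip(c_start, c_end)]
    return [cell + (j,) for cell in product(*ranges)]
```

In the Top-Down method this routine would be called for all levels \(j < h(p)\), in the Bottom-Up method for all \(j > h(p)\).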

## 3 Performance analysis

The algorithm is applicable to arbitrary systems; however, to estimate its performance, we restrict ourselves to systems that are homogeneous in time and space. For such systems accurate estimates can be obtained and optimal HGrid parameters can be found theoretically.

The computational work consists of two contributions:

- (1) \(T^{cd}\) (collision detection effort): the number of possible contacts that have to be examined more closely. The output of the HGrid algorithm is a list of possible contacts. Optimum HGrid parameters lead to a low number of possible contacts, because for each of them a computationally expensive exact geometrical intersection test has to be performed to check whether the particles really are in contact.
- (2) \(T^{ca}\) (cell-access effort): the number of times information is retrieved from a cell. While the goal of the HGrid is to obtain a list of all possible contacts, this comes at a computational cost, which is estimated by the number of times information is obtained from a single cell.

The particles are assumed to have:

- Random positions \(\varvec{x}_p\) within a \(d\)-dimensional box at packing fraction \(\nu \) (without excluded-volume effects).
- Random radii between \(r_{\text {min}}\) and \(r_{\text {max}}=\omega r_{\text {min}}\), according to a normalised probability density function \(f\left( r\right) \) (for more details see Sect. 4).

As described in Sect. 2.2, the algorithm checks for possible contacts at the level of insertion and at the other levels. For both types of contacts estimates of the number of possible contacts and the number of cells that have to be accessed are made in the following two subsections.

### 3.1 Level-of-insertion search

### 3.2 Cross-level search

### 3.3 Total computational work

The total computational work per level can now be calculated by summing its components.

## 4 Particle size probability distribution functions

- \(\alpha =0\): Uniform size (rectangular) distribution, i.e., equal numbers of bigger and smaller particles in equal radius intervals \(dr\).
- \(\alpha =-2\): Uniform area distribution, i.e., the total surface area of particles with radii between \(r_1\) and \(r_1+dr\) is equal to the total surface area of particles with radii between \(r_2\) and \(r_2+dr\), etc.
- \(\alpha =-3\): Uniform volume distribution, i.e., the total volume occupied by particles with radii between \(r_1\) and \(r_1+dr\) is equal to the total volume occupied by particles with radii between \(r_2\) and \(r_2+dr\), etc.
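The truncated power law \(f(r) \propto r^{\alpha}\) on \([r_{\text {min}}, \omega r_{\text {min}}]\) can be sampled by inverting its cumulative distribution function; a small sketch (the closed form follows from integrating \(r^{\alpha}\), with the log-uniform special case at \(\alpha = -1\); the function name is our own):

```python
import random

def sample_radius(alpha, r_min, omega, u=None):
    """Draw one radius from the truncated power law f(r) ~ r**alpha on
    [r_min, omega*r_min] by inverse-CDF sampling. alpha = 0, -2, -3 give
    the uniform size, area and volume distributions described above."""
    if u is None:
        u = random.random()          # uniform variate on [0, 1)
    r_max = omega * r_min
    if alpha == -1:                  # special case: log-uniform
        return r_min * omega ** u
    a1 = alpha + 1.0
    return (r_min ** a1 + u * (r_max ** a1 - r_min ** a1)) ** (1.0 / a1)
```

Passing `u` explicitly makes the mapping deterministic, which is convenient for testing or for stratified sampling.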

## 5 Cell sizes distribution

Having defined the HGrid algorithm, it remains to decide on the number of hierarchy levels \(L\) and the cell sizes \(s_h\) associated with these levels. In this section four different cell size distributions are introduced, discussed and compared in order to find optimal HGrid parameters. In the end, the predicted performance of the algorithm is compared against real discrete particle method (DPM) simulations, using MercuryDPM (mercurydpm.org) [29, 33].

### 5.1 Single-level grid

### 5.2 Multi-level cell size distribution

#### 5.2.1 Linear cell size distribution

#### 5.2.2 Exponential cell size distribution

#### 5.2.3 Constant ratio of the number of particles per cell

#### 5.2.4 Optimal cell size distribution

In order to check whether the constant number of particles per cell method indeed gives a close-to-optimal result, a numerical optimisation method is used to minimise Eq. (27) in terms of \(L\) and \(s_h\) under the conditions that \(s_{h+1}\ge s_{h}\), \(s_0=r_{\text {min}}\) and \(s_L=\omega r_{\text {min}}\). This is performed using the MATLAB [14] iterative optimisation function “fmincon”. In this function, a quadratic programming subproblem is solved at each iteration, where the Hessian of the Lagrangian at each iteration is calculated using the BFGS algorithm.
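Since Eq. (27) is not reproduced here, the sketch below takes a caller-supplied cost function and uses a plain random search as a stand-in for fmincon; it only illustrates the constraint structure stated above (monotone sizes with fixed endpoints \(s_0 = r_{\text {min}}\), \(s_L = \omega r_{\text {min}}\)), not the actual optimiser used in the paper:

```python
import random

def optimise_cell_sizes(cost, L, r_min, omega, iters=2000, seed=0):
    """Minimise a user-supplied cost(s) over cell sizes s_0 <= ... <= s_L
    with the endpoints fixed at s_0 = r_min and s_L = omega * r_min.
    Plain random search over feasible candidates; a stand-in for fmincon."""
    rng = random.Random(seed)
    lo, hi = r_min, omega * r_min

    def candidate():
        interior = sorted(rng.uniform(lo, hi) for _ in range(L - 1))
        return [lo] + interior + [hi]  # monotone by construction

    best = candidate()
    best_cost = cost(best)
    for _ in range(iters):
        s = candidate()
        c = cost(s)
        if c < best_cost:
            best, best_cost = s, c
    return best, best_cost
```

Sorting each random draw keeps every candidate feasible, so the constraints never need to be handled explicitly; a gradient-based solver such as fmincon converges far faster on smooth costs, but this sketch needs no toolbox.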

### 5.3 Comparison of the cell size distribution functions

In this subsection the four different cell size distributions are compared and best practices are given. From Fig. 3 and the analysis of the single-level reference case it becomes clear that the HGrid algorithm is essential for \(\alpha \le -1\); however, the required optimal parameters are yet to be determined. The required computational effort for different particle distributions using the four cell size distribution functions is shown in Figs. 4, 5, 6, 7, 8. All of the used algorithms show a significant decrease in computational effort relative to the single-level reference case for all parameters of the particle size distribution function. Even more importantly, all but one (the Bottom-Up algorithm using a linear cell size distribution for \(\alpha =0\)) of the test cases show that for large polydispersities (i.e., high values of \(\omega \)) the optimal efficiency of the algorithm is independent of \(\omega \). Also the choice of the cell size distribution function is not too important, as long as the other HGrid parameters are chosen optimally. However, in practice it is often difficult or impossible to calculate optimal parameters in advance, due to changing particles, density or geometries. Therefore, the sensitivity of the algorithm to different parameters becomes important.

### 5.4 Comparison with simulations

The estimated computational efficiency of the HGrid algorithm is compared against DPM simulations to check the assumptions used in the derivation. This is done in two steps: first, a single contact detection step is performed on particle positions obtained from real DPM simulations; then, a full DPM simulation is performed with optimal parameters and compared against a simulation using the linked-cell parameters.

#### 5.4.1 Contact detection test

For the single contact detection step, different packings of particles are generated, using a combination of event-driven and soft particle methods, for different packing fractions, particle size distributions and numbers of particles. More specifically, we use homogeneous, isotropic, disordered systems of colliding elastic spherical particles in a cubical box with hard walls or periodic boundary conditions (for more details see Ref. [22]). In MercuryDPM (mercurydpm.org) [29, 33] a contact detection step has been run using the optimal cell size distribution, where both the number of times a cell is accessed and the number of narrow phase contact detection steps have been counted to compare against the theoretical predictions.

#### 5.4.2 Full DPM test

Throughout this paper the performance of the HGrid algorithm is estimated and measured from the number of times a cell is accessed and the number of narrow-phase contact detection steps that are performed. In real DPM simulations, however, additional computational work is required, for example, in the integration routines, for handling periodic boundaries or walls, and for writing data. To show the real improvement from using optimal HGrid parameters, a simple free cooling simulation with moderate polydispersity has been run using different HGrid parameters [13]. More specifically, we performed 2D simulations using solid walls with \(10^4\) particles over \(2.5\times 10^5\) time steps. The particle sizes are distributed according to the (truncated) power-law size distribution with parameters \(\omega =20\) and \(\alpha =-3\), at a packing fraction of 0.4. According to our analysis the optimal parameters for this system are: the Top-Down algorithm with 5 levels of sizes 4.0, 7.9, 15.1, 27.2 and 40 times \(r_{\text {min}}\), respectively. These settings should speed up the contact detection part of the simulation by a factor of 35 compared with a linked-cell reference case (i.e., one level of size 40). The full DPM simulation with optimal parameters took about 27 min, whereas the reference case required 266 min. This speed-up by a factor of 9.9 is naturally lower than the predicted speed-up of the contact detection part alone, because of the additional force calculations, integration routines and wall interactions, but is still quite significant.

## 6 Summary and conclusion

Contact detection is a fundamental problem that occurs in many different kinds of simulation methods. This process is often computationally expensive, usually taking up a considerable proportion of CPU time, especially for systems with non-uniform density or polydisperse particle sizes.

In this paper, we studied analytically the computational effort of two algorithms for contact detection (i.e., Bottom-Up and Top-Down), based on the multi-level hierarchical grid data structure. The basic idea of these algorithms is that most particle pairs in a system cannot be in contact, as they are too distant from each other. The presented methods save a lot of time by excluding such pairs from a detailed and time-consuming contact examination and evaluation. The dependence of the performance of the neighbour-search algorithm on both the number of particles and the width of the particle size distribution is therefore of great importance.

As an input for the algorithm, the number of hierarchy levels and their cell sizes are required. Therefore, we tested four methods for choosing the hierarchical cell size distribution (i.e., linear, exponential, constant number of particles per cell and optimal) and compared their theoretical performance for a power-law particle size distribution function with exponent \(\alpha \). For almost all methods the performance of the algorithm becomes independent of the width of the particle size distribution \(\omega \), in contrast to the linked-cell method. Even better, the computational effort per particle of the algorithm decreases with increasing \(\omega \), or with decreasing \(\alpha \), at constant system packing fraction. In general, with optimal parameters, the algorithm is able to find contacts in arbitrarily polydisperse particle systems as fast as the linked-cell method finds contacts in purely monodisperse particle systems, i.e., no extra work is required due to polydispersity.

For the linear cell size distribution the optimal number of hierarchy levels is huge for systems with large polydispersity and \(\alpha <0\) (i.e., systems dominated by small particles). Therefore, for these kinds of systems, the linear cell size distribution has high computational overhead and in general does not perform well, especially for particles with complex geometries. The exponential cell size distribution performs better; however, it is very sensitive to the number of hierarchy levels used, so it is not appropriate for dynamical systems, where the particle size distribution, density or system geometry changes over time. Both the constant number of particles per cell and the optimal cell size distribution methods perform well, are not too sensitive to the number of levels, and have low overhead.

For \(\alpha \le -1\) the use of a multilevel grid becomes extremely efficient (i.e., orders of magnitude faster) as compared to the single-level linked-cell method, if optimal parameters are used. On the other hand, for \(\alpha >-1\), the use of a multilevel grid does not present a major advantage (but can improve performance slightly). In all our test cases (with optimum HGrid parameters), the contact detection time is estimated to be \(T \le 30 N\) for three-dimensional systems with spherical particles (where a unit of time is defined as the time required for a two-sphere overlap test). For future research, our analysis technique allows one to investigate how the algorithm performs for other, more realistic size distributions, e.g., log-normal.

## 7 Recommendations

- Use the Top-Down algorithm.
- If possible, perform your own minimisation using your exact system and the overhead factor \(K\) applicable to your solver, to obtain the optimal number of levels and their cell size distribution.
- Otherwise, use a cell size distribution where the number of particles per cell is approximately the same at each level.
- Since the optimum number of levels \(L\) depends on the particle size distribution function and the packing fraction, no general recommendation can be given. For distributions similar to the (truncated) power-law size distribution used throughout this paper we refer to Fig. 7.

## Footnotes

- 1.
The largest integer not greater than \(x\).

## Notes

### Acknowledgments

We would like to thank A. R. Thornton and S. González for helpful discussions. This research is supported by the Dutch Technology Foundation STW, which is the applied science division of NWO, and the Technology Programme of the Ministry of Economic Affairs, project number STW-MUST 10120, STW-HYDRO 12272, and STW-VICI grant 10828.

### References

1. Allen MP, Tildesley DJ (1989) Computer simulations of liquids. Clarendon Press, Oxford
2. Donev A, Stillinger FH, Torquato S (2005) Neighbor list collision-driven molecular dynamics simulation for nonspherical particles. I. Algorithmic details. J Comput Phys 202(2):737–764
3. Eitz M, Lixu G (2007) Hierarchical spatial hashing for real-time collision detection. In: Proceedings of the international conference on shape modeling and applications, SMI 2007, Lyon, pp 61–70
4. Ericson C (2004) Real-time collision detection. The Morgan Kaufmann Series in Interactive 3-D Technology. Morgan Kaufmann Publishers Inc., San Francisco, CA
5. Gavrilova ML, Rokne JG (2002) Collision detection optimization in a multi-particle system. In: Proceedings of the international conference on computational science, part III, ICCS 2002, London, pp 105–114
6. Gilbert E, Johnson D, Keerthi S (1988) A fast procedure for computing the distance between complex objects in three-dimensional space. IEEE J Robot Autom 4(2):193–203
7. Guibas L, Russel D (2004) An empirical comparison of techniques for updating Delaunay triangulations. In: Proceedings of the 20th annual symposium on computational geometry, SCG 2004, pp 170–179
8. He K, Dong S, Zhou Z (2007) Multigrid contact detection method. Phys Rev E 75:036710
9. Hockney RW, Eastwood JW (1981) Computer simulation using particles. McGraw-Hill, New York
10. Iwai T, Hong CW, Greil P (1999) Fast particle pair detection algorithms for particle simulations. Int J Mod Phys C 10(5):823–837
11. Krijgsman D, Luding S (2013) 2D cyclic pure shear of granular materials, simulations and model. Proc AIP Conf 1524(1):1226–1229
12. Lu L, Gu Z, Lei K, Wang S, Kase K (2010) An efficient algorithm for detecting particle contact in non-uniform size particulate system. Particuology 8(2):127–132
13. Luding S (2008) Introduction to discrete element methods: basic of contact force models and how to perform the micro-macro transition to continuum theory. Eur J Environ Civil Eng 12(7–8):785–826
14. MATLAB (2009) MATLAB: version 7.9.0. The MathWorks Inc., Natick
15. Mio H, Shimosaka A, Shirakawa Y, Hidaka J (2006) Optimum cell condition for contact detection having a large particle size ratio in the discrete element method. J Chem Eng Jpn 39(4):409–416
16. Mirtich B (1997) Efficient algorithms for two-phase collision detection. Technical Report TR-97-23, Mitsubishi Electric Research Laboratory, Cambridge
17. Mirtich B (1998) V-Clip: fast and robust polyhedral collision detection. ACM Trans Graph 17:177–208
18. Munjiza A (2004) The combined finite-discrete element method. Wiley, Chichester
19. Muth B, Müller MK, Eberhard P, Luding S (2007) Collision detection and administration methods for many particles with different sizes. In: Proceedings of the 4th international conference on discrete element methods, DEM 2007, Brisbane
20. Ogarko V, Luding S (2010) Data structures and algorithms for contact detection in numerical simulation of discrete particle systems. In: Proceedings of the 6th world congress on particle technology, WCPT6, Nuremberg
21. Ogarko V, Luding S (2011) A study on the influence of the particle packing fraction on the performance of a multilevel contact detection algorithm. In: Proceedings of the 2nd international conference on particle-based methods, fundamentals and applications, Barcelona, pp 1–7
22. Ogarko V, Luding S (2012) A fast multilevel algorithm for contact detection of arbitrarily polydisperse objects. Comput Phys Commun 183(4):931–936
23. Perkins E, Williams J (2001) A fast contact detection algorithm insensitive to object sizes. Eng Comput 18(1–2):48–61
24. Peters JF, Kala R, Maier RS (2009) A hierarchical search algorithm for discrete element method of greatly differing particle sizes. Eng Comput 26(6):621–634
25. Pournin L (2005) On the behavior of spherical and non-spherical grain assemblies, its modeling and numerical simulation. Ph.D. thesis, EPFL, Lausanne
26. Quentrec D, Brot C (1973) New method for searching for neighbors in molecular dynamics computations. J Comput Phys 13(3):430–432
27. Raschdorf S, Kolonko M (2011) A comparison of data structures for the simulation of polydisperse particle packings. Int J Numer Methods Eng 85:625–639
28. Schinner A (1999) Fast algorithms for the simulation of polygonal particles. Granul Matter 2(1):35–43
29. Thornton A, Weinhart T, Luding S, Bokhove O (2012) Modeling of particle size segregation: calibration using the discrete particle method. Int J Mod Phys C 23(8):1240014
30. Thornton AR, Krijgsman D, te Voortwis A, Ogarko V, Luding S, Fransen R, González S, Bokhove O, Imole O, Weinhart T (2013) A review of recent work on the discrete particle method at the University of Twente: an introduction to the open-source package MercuryDPM. In: Proceedings of the 6th international conference on discrete element methods, DEM-6, Golden, Denver, CO, pp 50–56
31. Ulrich T (2000) Loose octrees. In: DeLoura M (ed) Game programming gems. Charles River Media, Stamford, pp 444–453
32. Vemuri BC, Cao Y, Chen L (1998) Fast collision detection algorithms with applications to particle flow. Comput Graph Forum 17(2):121–134
33. Weinhart T, Thornton A, Luding S, Bokhove O (2012) From discrete particles to continuum fields near a boundary. Granul Matter 14(2):289–294
34. Williams J, O'Connor R (1995) Linear complexity intersection algorithm for discrete element simulation of arbitrary geometries. Eng Comput 12(2):185–201

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.