MeshCleaner: A Generic and Straightforward Algorithm for Cleaning Finite Element Meshes

  • Gang Mei
  • Salvatore Cuomo
  • Hong Tian
  • Nengxiong Xu
  • Linjun Peng
Part of the following topical collection:
  Special Issue on Programming Models and Algorithms for Data Analysis in HPC Systems

Abstract

Mesh cleaning is the procedure of removing duplicate nodes from a topologically invalid Finite Element mesh, resequencing the indices of the remaining nodes, and then updating the mesh connectivity. To the best of our knowledge, no previously reported work has focused specifically on cleaning large Finite Element meshes. In this paper we present MeshCleaner, a generic and straightforward algorithm for cleaning large Finite Element meshes. The algorithm consists of two stages: (1) compacting and reordering the nodes and (2) updating the mesh topology. We introduce the basic ideas for performing both stages efficiently, in sequential as well as in parallel, and develop one serial and two parallel implementations of MeshCleaner on multi-core CPUs and/or many-core GPUs. To evaluate the performance of the algorithm, three groups of experimental tests are conducted. The results indicate that MeshCleaner is capable of cleaning large meshes very efficiently, both in sequential and in parallel, and that it is generic, simple, and practical.
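The two stages described in the abstract can be sketched as follows. This is a minimal serial illustration, not the authors' implementation: the function name `clean_mesh`, the coordinate-rounding tolerance, and the dictionary-based deduplication are all assumptions made for the sketch; the paper's parallel variants instead rely on sorting and scan primitives.

```python
import math

def clean_mesh(nodes, elements, tol=1e-9):
    """Remove duplicate nodes, renumber the survivors consecutively,
    and remap element connectivity onto the new node indices."""
    # Stage 1: compact and reorder nodes.
    # Round coordinates to the tolerance so coincident nodes hash equally.
    ndigits = max(0, round(-math.log10(tol)))
    unique_nodes = []   # compacted node list, in order of first appearance
    key_to_new = {}     # rounded coordinates -> new (compacted) index
    old_to_new = []     # old index -> new index, used in stage 2
    for x, y, z in nodes:
        key = (round(x, ndigits), round(y, ndigits), round(z, ndigits))
        if key not in key_to_new:
            key_to_new[key] = len(unique_nodes)
            unique_nodes.append((x, y, z))
        old_to_new.append(key_to_new[key])
    # Stage 2: update mesh topology by remapping every element's node indices.
    new_elements = [tuple(old_to_new[i] for i in elem) for elem in elements]
    return unique_nodes, new_elements
```

After cleaning, two elements that originally referenced distinct duplicates of the same physical node reference one shared node, restoring a topologically valid mesh.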

Keywords

Finite Element mesh · Data structure · Mesh topology · Parallel algorithm

Acknowledgements

This research was supported by the Natural Science Foundation of China (Grant Nos. 11602235, 51674058 and 41602374), China Postdoctoral Science Foundation (2015M571081), and the Fundamental Research Funds for the Central Universities (2652015065).

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Department of Geological Engineering, Qinghai University, Xining, China
  2. School of Engineering and Technology, China University of Geosciences, Beijing, China
  3. Department of Mathematics and Applications “R. Caccioppoli”, University of Naples Federico II, Naples, Italy
  4. Faculty of Engineering, China University of Geosciences, Wuhan, China
  5. Academician Pioneering Park, Dalian University, Dalian, China