
ParFUM: a parallel framework for unstructured meshes for scalable dynamic physics applications

  • Original Article, published in Engineering with Computers

Abstract

Unstructured meshes are used in many engineering applications with irregular domains, from elastic deformation problems to crack propagation to fluid flow. Because of their complexity and dynamic behavior, the development of scalable parallel software for these applications is challenging. The Charm++ Parallel Framework for Unstructured Meshes allows one to write parallel programs that operate on unstructured meshes with only minimal knowledge of parallel computing, while making it possible to achieve excellent scalability even for complex applications. Charm++’s message-driven model enables computation/communication overlap, while its run-time load balancing capabilities make it possible to react to the changes in computational load that occur in dynamic physics applications. The framework is highly flexible and has been enhanced with numerous capabilities for the manipulation of unstructured meshes, such as parallel mesh adaptivity and collision detection.
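The core communication pattern such a framework automates is the combination of partial nodal values (e.g., assembled forces) at partition boundaries, so that every copy of a shared node holds the full sum. A minimal serial sketch of that idea follows; the two-partition layout, node numbering, and function names are illustrative only and are not ParFUM's API:

```python
# Two mesh partitions share boundary nodes. Each partition computes a
# partial nodal sum over its own elements; the framework then combines
# contributions at shared nodes so every copy holds the full value.

# Global node ids 0-3. Partition A's elements touch nodes {0, 1, 2};
# partition B's elements touch nodes {1, 2, 3}; nodes 1 and 2 are shared.
partial = {
    "A": {0: 1.0, 1: 0.5, 2: 0.25},
    "B": {1: 0.5, 2: 0.75, 3: 2.0},
}

def combine_shared(partials):
    """Sum the contributions for every global node across partitions."""
    total = {}
    for part in partials.values():
        for node, val in part.items():
            total[node] = total.get(node, 0.0) + val
    return total

print(combine_shared(partial))  # {0: 1.0, 1: 1.0, 2: 1.0, 3: 2.0}
```

In the real framework this exchange happens via messages between partitions rather than a shared dictionary, which is what lets the runtime overlap it with computation.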

[Figures 1–24 appear in the full article; captions are not available in this preview.]


Notes

  1. Caveat: because floating-point arithmetic is not associative, shared nodes with more than two neighbors may receive values that differ by round-off error when their contributions are summed in different orders.

  2. Turing is a cluster of 640 Apple Xserves connected by Myrinet. Each node has dual 2 GHz G5 processors and 4 GB of RAM.
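The non-associativity caveat in note 1 is easy to demonstrate with any floating-point numbers; this minimal Python illustration uses values chosen for clarity, unrelated to ParFUM's internals:

```python
# Floating-point addition is not associative: the same three
# contributions summed in different orders give different results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False: the sums differ by one unit in the last place
```

This is why a parallel reduction over shared mesh nodes can yield bitwise-different results on different copies unless the summation order is fixed.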


Acknowledgments

The authors wish to thank Milind Bhandarkar, the developer of ParFUM’s predecessor, the Charm++ FEM Framework, as well as Aaron Becker and Dima Ofman for their contributions to the new ParFUM codes. We would also like to thank our collaborators on the various applications that make use of ParFUM: Philippe Geubelle, Scot Breitenfeld, Sandhya Mangala and Robert Haber. This work was supported in part by the National Science Foundation (DMR 0121695 and NGS 0103645) and the Department of Energy (B341494).

Author information

Corresponding author: Orion S. Lawlor.


Cite this article

Lawlor, O.S., Chakravorty, S., Wilmarth, T.L. et al. ParFUM: a parallel framework for unstructured meshes for scalable dynamic physics applications. Engineering with Computers 22, 215–235 (2006). https://doi.org/10.1007/s00366-006-0039-5
