
A Fine-Granular Programming Scheme for Irregular Scientific Applications

  • Conference paper
  • Conference series: Advanced Computer Architecture (ACA 2016)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 626)

Abstract

HPC systems are widely used to accelerate calculation-intensive irregular applications, e.g., molecular dynamics (MD) simulations, astrophysics applications, and irregular grid applications. As the scale and complexity of current HPC systems keep growing, it is difficult to parallelize these applications efficiently due to irregular communication patterns, load imbalance, dynamic characteristics, and more. This paper presents a fine-granular programming scheme with which programmers can implement parallel scientific applications in a fine-granular, SPMD (single program multiple data) fashion. Unlike current programming models that start from the global data structure, this scheme provides a high-level, object-oriented programming interface that supports writing applications by focusing on the finest granular elements and their interactions. Its implementation framework takes care of the implementation details, e.g., data partitioning, automatic EP aggregation, memory management, and data communication. Experimental results on SuperMUC show that the OOP implementations of multi-body and irregular applications incur little overhead compared to manual implementations using C++ with OpenMP or MPI, while improving programming productivity in terms of source code size, coding method, and implementation difficulty.



Author information

Correspondence to Haowei Huang.


Copyright information

© 2016 Springer Science+Business Media Singapore

About this paper

Cite this paper

Huang, H., Jiang, L., Dong, W., Chang, R., Hou, Y., Gerndt, M. (2016). A Fine-Granular Programming Scheme for Irregular Scientific Applications. In: Wu, J., Li, L. (eds) Advanced Computer Architecture. ACA 2016. Communications in Computer and Information Science, vol 626. Springer, Singapore. https://doi.org/10.1007/978-981-10-2209-8_12

  • DOI: https://doi.org/10.1007/978-981-10-2209-8_12

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2208-1

  • Online ISBN: 978-981-10-2209-8
