
Energy Efficient Runtime Framework for Exascale Systems

  • Conference paper
High Performance Computing (ISC High Performance 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9945)


Abstract

Building an Exascale computer that solves scientific problems three orders of magnitude faster than current Petascale systems is harder than simply scaling up the hardware. On the path to the first Exascale machine, energy consumption has emerged as a crucial factor. Every component will have to change to create an Exascale system capable of a million trillion (10^18) computations per second. To run efficiently on such huge systems and to exploit every bit of computational power, software and the underlying algorithms must be rewritten. While many compute-intensive applications are designed around the Message Passing Interface (MPI) with its two-sided communication semantics, the Partitioned Global Address Space (PGAS) model provides an abstraction of a global address space that allows a distributed system to be treated as if its memory were shared. Data locality and communication can be optimized through the one-sided communication offered by PGAS. In this paper we present an energy-aware runtime framework that is PGAS-based and uses MPI as its underlying communication layer.
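The PGAS property the abstract relies on can be illustrated with a minimal sketch. The following toy code (not the paper's framework; all names such as `GlobalArray`, `put`, and `get` are hypothetical) simulates a global index space partitioned block-wise across "ranks": a one-sided `put` or `get` resolves a global index to its owner rank and local offset without the owner actively participating, in contrast to two-sided MPI, where matching send and receive calls are required on both sides.

```python
# Toy PGAS-style global array, partitioned block-wise across simulated ranks.
# One-sided put/get translate a global index into (owner rank, local offset);
# the "target rank" stays passive, which is what enables locality-aware and
# communication-optimized access in PGAS models.

class GlobalArray:
    def __init__(self, size, nranks):
        self.nranks = nranks
        self.block = (size + nranks - 1) // nranks  # block size per rank
        # each rank owns one contiguous block of the global index space
        self.local = [[0] * self.block for _ in range(nranks)]

    def owner(self, gidx):
        """Locality information: which rank holds global index gidx, and where."""
        return gidx // self.block, gidx % self.block

    def put(self, gidx, value):
        """One-sided write: no matching receive needed on the owner."""
        rank, off = self.owner(gidx)
        self.local[rank][off] = value

    def get(self, gidx):
        """One-sided read: the owner rank remains passive."""
        rank, off = self.owner(gidx)
        return self.local[rank][off]

ga = GlobalArray(size=16, nranks=4)
ga.put(10, 42)          # global index 10 lands in rank 2's block (indices 8..11)
print(ga.owner(10))     # (2, 2)
print(ga.get(10))       # 42
```

In a real runtime this address translation is backed by remote memory access (e.g. MPI one-sided operations such as `MPI_Put`/`MPI_Get` over an `MPI_Win`), which is the kind of MPI substrate layering the paper describes.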



Acknowledgement

We gratefully acknowledge funding by the German Research Foundation (DFG) through the German Priority Programme 1648 Software for Exascale Computing (SPPEXA).

Author information


Correspondence to Yousri Mhedheb.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Mhedheb, Y., Streit, A. (2016). Energy Efficient Runtime Framework for Exascale Systems. In: Taufer, M., Mohr, B., Kunkel, J. (eds.) High Performance Computing. ISC High Performance 2016. Lecture Notes in Computer Science, vol. 9945. Springer, Cham. https://doi.org/10.1007/978-3-319-46079-6_3


  • DOI: https://doi.org/10.1007/978-3-319-46079-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46078-9

  • Online ISBN: 978-3-319-46079-6

  • eBook Packages: Computer Science (R0)
