Abstract
Building an Exascale computer that solves scientific problems three orders of magnitude faster than current Petascale systems is harder than simply scaling up the hardware. On the road to the first Exascale computer, energy consumption has emerged as a crucial factor. Every component will have to change to create an Exascale system capable of a million trillion calculations per second. To run efficiently on such systems and to exploit all of the available computational power, software and the underlying algorithms will have to be rewritten. While many compute-intensive applications are designed around the Message Passing Interface (MPI) with its two-sided communication semantics, the Partitioned Global Address Space (PGAS) model provides an abstraction of a global address space that allows a distributed system to be treated as if its memory were shared. Data locality and communication can be optimized through the one-sided communication offered by PGAS. In this paper we present an energy-aware runtime framework that is PGAS-based and uses MPI as its underlying communication layer.
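To illustrate the one-sided communication semantics referred to above, the following is a minimal sketch in C of how a PGAS-style remote read can be layered on MPI-3 one-sided operations. It is not the framework presented in the paper; the window layout, the fence-based synchronization, and all variable names are illustrative assumptions.

/* Minimal sketch (illustrative, not the paper's runtime): each rank exposes
 * one integer as its segment of a "global address space" via an MPI window,
 * and rank 0 reads a remote segment with a one-sided MPI_Get. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Local segment of the global address space: a single integer. */
    int local_cell = rank;
    MPI_Win win;
    MPI_Win_create(&local_cell, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* One-sided access: rank 0 reads the cell of the last rank without the
     * target posting a matching receive (no two-sided semantics involved). */
    MPI_Win_fence(0, win);
    if (rank == 0 && size > 1) {
        int remote_value;
        MPI_Get(&remote_value, 1, MPI_INT, size - 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);   /* complete the epoch before using the data */
        printf("rank 0 read %d from rank %d\n", remote_value, size - 1);
    } else {
        MPI_Win_fence(0, win);   /* all ranks must participate in the fence */
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}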
Acknowledgement
We gratefully acknowledge funding by the German Research Foundation (DFG) through the German Priority Programme 1648 Software for Exascale Computing (SPPEXA).