Performance and Power-Aware Modeling of MPI Applications for Cluster Computing

  • Jerzy Proficz
  • Paweł Czarnul
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9574)

Abstract

The paper presents modeling of performance and power consumption when running parallel applications on modern cluster-based systems. The model is composed of basic blocks representing either computations or communication, the latter covering both point-to-point and collective operations. Real measurements were performed using MPI applications and routines run on three different clusters with both InfiniBand and Gigabit Ethernet interconnects. Regression was used to obtain system-specific coefficients, while all systems are described with the same formulas. The model has been incorporated into the MERPSYS environment for modeling parallel applications and simulating their execution on large-scale cluster and volunteer-based systems. Using specific application and system models, MERPSYS allows prediction of application execution time, reliability and power consumption of the resources used during computations. Consequently, the proposed models for computational and communication blocks are of utmost importance for the environment.
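As an illustration of the kind of modeling the abstract describes, the sketch below fits a Hockney-style point-to-point communication model, t(n) = t_startup + n / bandwidth, to measured transfer times by least-squares regression and then derives a simple energy estimate as average power multiplied by the predicted time. This is only a minimal example under assumed measurement data and hypothetical names (sizes, times, avg_power_watts); it does not reproduce the coefficients or formulas of the paper's actual model.

# Illustrative sketch (not the paper's formulas): fit a linear
# point-to-point communication time model t(n) = t_startup + n / bandwidth
# to measured timings, then estimate the energy of a communication block
# as average node power times the predicted time. All data are hypothetical.
import numpy as np

# Hypothetical measurements: message sizes (bytes) and observed transfer times (s)
sizes = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
times = np.array([1.2e-5, 2.0e-5, 1.1e-4, 9.8e-4, 9.5e-3])

# Least-squares fit of t = a + b * n, where a ~ startup latency and 1/b ~ bandwidth
A = np.vstack([np.ones_like(sizes), sizes]).T
(a, b), *_ = np.linalg.lstsq(A, times, rcond=None)
print(f"startup latency ~ {a:.2e} s, bandwidth ~ {1.0 / b / 1e9:.2f} GB/s")

def predict_time(n_bytes: float) -> float:
    """Predicted transfer time for a message of n_bytes using the fitted coefficients."""
    return a + b * n_bytes

def predict_energy(n_bytes: float, avg_power_watts: float) -> float:
    """Energy of a communication block: average power times predicted transfer time."""
    return avg_power_watts * predict_time(n_bytes)

print(f"8 MB transfer: {predict_time(8e6) * 1e3:.2f} ms, "
      f"~{predict_energy(8e6, avg_power_watts=150.0):.3f} J at 150 W")

In the approach described above, analogous regressions would be performed per system and per block type (computation, point-to-point and collective communication), yielding cluster-specific coefficients that are plugged into the same formulas within the simulation environment.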

Keywords

Performance model · Energy consumption · Cluster computing · MPI

Acknowledgments

The work was performed within the grant “Modeling efficiency, reliability and power consumption of multilevel parallel HPC systems using CPUs and GPUs” sponsored by the National Science Center in Poland based on decision no. DEC-2012/07/B/ST6/01516.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Academic Computer Center TASK, Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, Gdańsk, Poland