Mapping fine-grained power measurements to HPC application runtime characteristics on IBM POWER7

  • Michael Knobloch
  • Maciej Foszczynski
  • Willi Homberg
  • Dirk Pleiter
  • Hans Böttiger
Special Issue Paper

Abstract

Optimization of energy consumption is a key issue for future HPC systems. Evaluating energy consumption requires fine-grained power measurements, and additional useful information is obtained when these measurements are performed at the component level. In this paper we describe a setup which allows us to perform component-level power measurements at a resolution of up to 1 ms on IBM POWER (IBM and POWER are trademarks of IBM in the USA and/or other countries.) machines. We further developed a plug-in for VampirTrace that allows us to correlate these power measurements with application performance characteristics, e.g. those obtained from hardware performance counters. This environment enables us to generate both power and performance profiles. Such profiles provide valuable input for developing future strategies to improve workload-driven energy usage per unit of performance. A comparison with power profiles of coarser granularity shows that these fine-grained measurements are necessary to capture the dynamics of power switching.
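The correlation step described above can be illustrated with a short sketch. The following C program is not the authors' VampirTrace plug-in; all type and function names (power_sample_t, region_interval_t, accumulate_energy) are hypothetical, and the data is fabricated. It shows the basic idea of attributing 1 ms power samples to instrumented regions by their enter/exit timestamps, yielding a per-region energy profile of the kind discussed in the paper.

```c
/* Minimal sketch: attribute 1 ms power samples to trace regions.
 * All names here are illustrative assumptions, not the authors'
 * implementation. Energy per region is approximated by summing
 * P(t) * dt over the samples whose timestamps fall inside the
 * region's [enter, exit) interval. */
#include <stdio.h>

typedef struct {
    double t_s;      /* sample timestamp in seconds     */
    double watts;    /* component power at that instant */
} power_sample_t;

typedef struct {
    const char *name;  /* instrumented region name */
    double enter_s;    /* region enter timestamp   */
    double exit_s;     /* region exit timestamp    */
    double joules;     /* accumulated energy (output) */
} region_interval_t;

/* Attribute each sample's energy (power times sample period) to the
 * region whose interval contains the sample timestamp. */
static void accumulate_energy(const power_sample_t *samples, int n_samples,
                              double period_s,
                              region_interval_t *regions, int n_regions)
{
    for (int i = 0; i < n_samples; ++i)
        for (int r = 0; r < n_regions; ++r)
            if (samples[i].t_s >= regions[r].enter_s &&
                samples[i].t_s <  regions[r].exit_s)
                regions[r].joules += samples[i].watts * period_s;
}

int main(void)
{
    /* Fabricated example data: 1 ms sampling, two regions. */
    power_sample_t samples[] = {
        {0.000, 120.0}, {0.001, 135.0}, {0.002, 140.0},
        {0.003, 138.0}, {0.004, 118.0}, {0.005, 117.0},
    };
    region_interval_t regions[] = {
        {"compute",  0.001, 0.004, 0.0},
        {"mpi_wait", 0.004, 0.006, 0.0},
    };

    accumulate_energy(samples, 6, 0.001, regions, 2);

    for (int r = 0; r < 2; ++r)
        printf("%-8s %.3f J over %.3f s\n", regions[r].name,
               regions[r].joules, regions[r].exit_s - regions[r].enter_s);
    return 0;
}
```

In the actual tool chain, the samples would be delivered into the trace through the VampirTrace plugin counter interface [12] and integrated by the analysis tool; the nested loop above is merely the simplest way to express the attribution.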

Keywords

Energy · Performance · Power consumption · POWER7

References

  1. Brochard L, Panda R, Vemuganti S (2010) Optimizing performance and energy of HPC applications on POWER7. Comput Sci Res Dev 25(3–4):135–140. http://www.springerlink.com/index/10.1007/s00450-010-0123-3
  2. Floyd M, Allen-Ware M, Rajamani K, Brock B, Lefurgy C, Drake A, Pesantez L, Gloekler T, Tierno J, Bose P, Buyuktosunoglu A (2011) Introducing the adaptive energy management features of the POWER7 chip. IEEE Micro 31(2):60–75. doi:10.1109/MM.2011.29
  3. Ge R, Feng X, Song S, Chang HC, Li D, Cameron K (2010) PowerPack: energy profiling and analysis of high-performance systems and applications. IEEE Trans Parallel Distrib Syst 21(5):658–671. doi:10.1109/TPDS.2009.76
  4. Geimer M, Wolf F, Wylie BJN, Ábrahám E, Becker D, Mohr B (2010) The Scalasca performance toolset architecture. Concurr Comput Pract Exp 22(6):702–719. doi:10.1002/cpe.1556
  5. Hennecke M, Frings W, Homberg W, Zitz A, Knobloch M, Böttiger H (2012) Measuring power consumption on IBM Blue Gene/P. Comput Sci Res Dev. doi:10.1007/s00450-011-0192-y
  6. Kamil S, Shalf J, Strohmaier E (2008) Power efficiency in high performance computing. In: IEEE international symposium on parallel and distributed processing, pp 1–8
  7. Knüpfer A, Brunst H, Doleschal J, Jurenz M, Lieber M, Mickler H, Müller MS, Nagel WE (2008) The Vampir performance analysis tool-set. In: Tools for high performance computing. Proceedings of the 2nd international workshop on parallel tools. Springer, Berlin, pp 139–155
  8. Knüpfer A, Rössel C, an Mey D, Biersdorff S, Diethelm K, Eschweiler D, Geimer M, Gerndt M, Lorenz D, Malony AD, Nagel WE, Oleynik Y, Philippen P, Saviankou P, Schmidl D, Shende SS, Tschüter R, Wagner M, Wesarg B, Wolf F (2012) Score-P: a joint performance measurement run-time infrastructure for Periscope, Scalasca, TAU, and Vampir. In: Proceedings of the 5th parallel tools workshop, 2011, Dresden, Germany. Springer, Berlin, pp 79–91
  9. Lefurgy C, Wang X, Ware M (2007) Server-level power control. In: Proceedings of the IEEE international conference on autonomic computing (ICAC)
  10. Lively C, Wu X, Taylor V, Moore S, Chang HC, Cameron K (2011) Energy and performance characteristics of different parallel implementations of scientific applications on multicore systems. Int J High Perform Comput Appl 25(3):342–350. doi:10.1177/1094342011414749
  11. Minartz T, Molka D, Knobloch M, Krempel S, Ludwig T, Nagel WE, Mohr B, Falter H (2012) eeClust: energy-efficient cluster computing. In: Bischof C, Hegering HG, Nagel WE, Wittum G (eds) Competence in high performance computing 2010. Springer, Berlin, pp 111–124. doi:10.1007/978-3-642-24025-6_10
  12. Schöne R, Tschüter R, Ilsche T, Hackenberg D (2011) The VampirTrace plugin counter interface: introduction and examples. In: Proceedings of the Euro-Par 2010 parallel processing workshops. Springer, Berlin, pp 501–511
  13. Sutmann G, Westphal L, Bolten M (2010) Particle based simulations of complex systems with MP2C: hydrodynamics and electrostatics. AIP Conf Proc 1281(1):1768–1772. doi:10.1063/1.3498216
  14. Terpstra D, Jagode H, You H, Dongarra J (2010) Collecting performance data with PAPI-C. In: Müller MS, Resch MM, Schulz A, Nagel WE (eds) Tools for high performance computing 2009. Springer, Berlin, pp 157–173. doi:10.1007/978-3-642-11261-4_11
  15. Winkel M, Speck R, Hübner H, Arnold L, Krause R, Gibbon P (2012) A massively parallel, multi-disciplinary Barnes–Hut tree code for extreme-scale N-body simulations. Comput Phys Commun 183(4):880–889. doi:10.1016/j.cpc.2011.12.013

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Michael Knobloch (1)
  • Maciej Foszczynski (1)
  • Willi Homberg (1)
  • Dirk Pleiter (1)
  • Hans Böttiger (2)
  1. Jülich Supercomputing Centre (JSC), Jülich, Germany
  2. IBM Deutschland Research & Development GmbH, Böblingen, Germany
