Computer Science - Research and Development

Volume 27, Issue 4, pp 329–336

Measuring power consumption on IBM Blue Gene/P

  • Michael Hennecke
  • Wolfgang Frings
  • Willi Homberg
  • Anke Zitz
  • Michael Knobloch
  • Hans Böttiger
Open Access
Special Issue Paper

Abstract

Energy efficiency is a key design principle of the IBM Blue Gene series of supercomputers, and Blue Gene systems have consistently gained top GFlops/Watt rankings on the Green500 list. The Blue Gene hardware and management software provide built-in features to monitor power consumption at all levels of the machine’s power distribution network. This paper presents the Blue Gene/P power measurement infrastructure and discusses the operational aspects of using this infrastructure on Petascale machines. We also describe the integration of Blue Gene power monitoring capabilities into system-level tools like LLview, and highlight some results of analyzing the production workload at Research Center Jülich (FZJ).

Keywords

Blue Gene · Energy efficiency · Power consumption

References

  1. The Top500 list (November 2010). http://www.top500.org/lists/2010/11
  2.
  3. Kogge P (2009) Energy at exaflops. SC09 exascale panel. http://www.exascale.org/mediawiki/images/6/6e/Sc09-exa-panel-kogge.pdf
  4. Bekas C, Curioni A (2010) A new energy aware performance metric. Comput Sci Res Dev 25:187–195. doi:10.1007/s00450-010-0119-z
  5. Moreira J et al (2007) The Blue Gene/L supercomputer: a hardware and software story. Int J Parallel Program 35(3):181–206. doi:10.1007/s10766-007-0037-2
  6. Gara A et al (2005) Overview of the Blue Gene/L system architecture. IBM J Res Dev 49(2/3):195–212. doi:10.1147/rd.492.0195
  7. Coteus P et al (2005) Packaging the Blue Gene/L supercomputer. IBM J Res Dev 49(2/3):213–248. doi:10.1147/rd.492.0213
  8. Moreira J et al (2005) Blue Gene/L programming and operating environment. IBM J Res Dev 49(2/3):367–376. doi:10.1147/rd.492.0367
  9. IBM Blue Gene team (2008) Overview of the IBM Blue Gene/P project. IBM J Res Dev 52(1/2):199–220. doi:10.1147/rd.521.0199
  10. Bright A, Ellavsky M, Gara A, Haring R, Kopcsay G, Lembach R, Marcella J, Ohmacht M, Salapura V (2005) Creating the BlueGene/L supercomputer from low-power SoC ASICs. In: Proceedings of the IEEE international solid-state circuits conference. doi:10.1109/ISSCC.2005.1493932
  11. Ware M, Rajamani K, Floyd M, Brock B, Rubio J, Rawson F, Carter J (2010) Architecting for power management: the IBM POWER7 approach. In: 2010 IEEE 16th international symposium on high performance computer architecture (HPCA). doi:10.1109/HPCA.2010.5416627
  12. Floyd M, Ware M, Rajamani K, Gloekler T, Brock B, Bose P, Buyuktosunoglu A, Rubio J, Schubert B, Spruth B, Tierno JA, Pesantez L (2011) Adaptive energy-management features of the IBM POWER7 chip. IBM J Res Dev 55(3). doi:10.1147/JRD.2011.2114250
  13. Brochard L, Panda R, Vemuganti S (2010) Optimizing performance and energy of HPC applications on POWER7. Comput Sci Res Dev 25:135–140. doi:10.1007/s00450-010-0123-3
  14. Iyer S, Barth J, Parries P, Norum J, Rice J, Logan L, Hoyniak D (2005) Embedded DRAM: technology platform for the Blue Gene/L chip. IBM J Res Dev 49(2/3):333–350. doi:10.1147/rd.492.0333
  15. Baier H et al (2010) QPACE: power-efficient parallel architecture based on IBM PowerXCell 8i. Comput Sci Res Dev 25:149–154. doi:10.1007/s00450-010-0122-4
  16. Lakner G (2010) IBM system Blue Gene solution: Blue Gene/P system administration. IBM Redbook SG24-7417-03. http://www.redbooks.ibm.com/abstracts/sg247417.html
  17. Hennecke M (2010) Saving block history data in the Blue Gene/P database. IBM Techdocs whitepaper WP101678. http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101678
  18. LLview: graphical monitoring of LoadLeveler controlled clusters. http://www.fz-juelich.de/jsc/llview/
  19. Frings W (2007) New features of the batch system monitoring tool LLview. ScicomP13, Garching, 20 July 2007. http://www.spscicomp.org/ScicomP13/Presentations/User/WolfgangFrings-MON.pdf
  20. Nagel W, Weber M, Hoppe H-C, Solchenbach K (1996) VAMPIR: visualization and analysis of MPI resources. Supercomputer 12(1):69–80. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.1615
  21. Knüpfer A, Brendel R, Brunst H, Mix H, Nagel W (2006) Introducing the open trace format (OTF). In: Computational science (ICCS 2006). Lecture notes in computer science, vol 3992, pp 526–533. doi:10.1007/11758525_71

Copyright information

© The Author(s) 2011

Authors and Affiliations

  • Michael Hennecke (1)
  • Wolfgang Frings (2)
  • Willi Homberg (2)
  • Anke Zitz (2)
  • Michael Knobloch (2)
  • Hans Böttiger (3)
  1. IBM Deutschland GmbH, Düsseldorf, Germany
  2. Forschungszentrum Jülich GmbH, Jülich, Germany
  3. IBM Deutschland Research & Development GmbH, Böblingen, Germany