Principles of Energy Efficiency in High Performance Computing

  • Axel Auweter
  • Arndt Bode
  • Matthias Brehm
  • Herbert Huber
  • Dieter Kranzlmüller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6868)

Abstract

High Performance Computing (HPC) is a key technology for modern researchers, enabling scientific advances through simulation where experiments are either technically impossible or not financially feasible to conduct, and where theory is not applicable. However, the high degree of computational power available from today’s supercomputers comes at the cost of large quantities of electrical energy being consumed.

This paper aims to give an overview of current state-of-the-art and future techniques to reduce the overall power consumption of HPC systems and sites. We believe that a holistic approach to monitoring and operation at all levels of a supercomputing site is necessary. Thus, we concentrate not only on improving the energy efficiency of the compute hardware itself, but also on site infrastructure components for power distribution and cooling. Since most of the energy consumed by supercomputers is converted into heat, we also outline possible technologies to re-use waste heat in order to improve the Power Usage Effectiveness (PUE) of the entire supercomputing site.
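For context, the PUE metric referenced above is defined as total facility energy (IT load plus cooling, power distribution, and other overhead) divided by the energy delivered to the IT equipment, so it approaches its ideal value of 1.0 as infrastructure overhead shrinks. A minimal sketch of the computation (the function name and the sample figures below are illustrative, not taken from the paper):

```python
def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value: 1.0)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical site drawing 1500 kW in total to run a 1000 kW IT load:
pue = power_usage_effectiveness(1500.0, 1000.0)
print(pue)  # -> 1.5, i.e. 0.5 W of overhead per watt of compute
```

Reducing cooling overhead (e.g. via free cooling or warm-water cooling) lowers the numerator while the IT load stays fixed, which is why the PUE ratio is the standard yardstick for site-level, as opposed to hardware-level, efficiency gains.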

Keywords

High Performance Computing · Energy Efficiency · Power Usage Effectiveness · HPC · PUE

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Axel Auweter¹
  • Arndt Bode¹
  • Matthias Brehm¹
  • Herbert Huber¹
  • Dieter Kranzlmüller¹

  1. Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities, Garching bei München, Germany
