Computer Science - Research and Development
Volume 27, Issue 4, pp 337–345

Towards an energy-aware scientific I/O interface

Stretching the ADIOS interface to foster performance analysis and energy awareness
  • Julian M. Kunkel
  • Timo Minartz
  • Michael Kuhn
  • Thomas Ludwig
Special Issue Paper

Abstract

Intelligently switching the energy saving modes of CPUs, NICs and disks is mandatory to reduce energy consumption.

Hardware and the operating system have only a limited view of future performance demands, so automatic control is suboptimal. However, it is tedious for developers to control the hardware themselves.
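To make the manual approach concrete, the following is a minimal sketch (not taken from the paper) of how a developer could force a core into a power-saving mode on Linux by writing to the cpufreq sysfs interface; the path and governor names assume a standard Linux cpufreq setup and sufficient privileges.

    /* Minimal sketch: manually switching a core's frequency governor via
     * the Linux cpufreq sysfs interface. Assumes cpufreq support and
     * write permission on the sysfs file; not part of the proposed interface. */
    #include <stdio.h>

    static int set_governor(const char *governor)
    {
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
        if (f == NULL)
            return -1;                 /* no cpufreq support or no permission */
        fprintf(f, "%s\n", governor);  /* e.g. "powersave" or "performance" */
        return fclose(f);
    }

    int main(void)
    {
        set_governor("powersave");     /* before an idle or I/O-bound phase */
        /* ... wait for I/O or communication ... */
        set_governor("performance");   /* before the next compute phase */
        return 0;
    }

Keeping such switches consistent with the program's actual phases, and doing so for CPUs, NICs and disks alike, quickly becomes error-prone, which motivates delegating this task to the I/O library.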

In this paper we propose an extension of an existing I/O interface which is easy to use and, at the same time, could steer energy saving modes more efficiently. Furthermore, the proposed modifications benefit performance analysis and provide additional information to the I/O library that can be used to improve performance.

With the proposed interface, the developer annotates the program to label its I/O, communication and computation phases, as sketched below. The run-time behavior is then characterized for each phase, and this knowledge can be exploited by the new library.
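As an illustration of such annotations, the sketch below labels the compute, communication and I/O phases of a toy time-stepping loop. The phase_begin()/phase_end() calls are hypothetical names; they are neither the actual ADIOS API nor necessarily the exact interface proposed in the paper. The stubs merely log the phase switches where the envisioned library would characterize run-time behavior and steer energy saving modes.

    /* Hypothetical phase annotations (not the real ADIOS API): the stubs
     * only log phase transitions; the proposed library would use them to
     * characterize each phase and switch hardware power modes. */
    #include <stdio.h>
    #include <unistd.h>

    static void phase_begin(const char *name) { printf("enter phase: %s\n", name); }
    static void phase_end(const char *name)   { printf("leave phase: %s\n", name); }

    /* placeholders for the application's real work */
    static void simulation_timestep(void) { usleep(1000); }
    static void exchange_halos(void)      { usleep(1000); }
    static void write_checkpoint(int s)   { (void)s; usleep(1000); }

    int main(void)
    {
        for (int step = 0; step < 3; step++) {
            phase_begin("compute");       /* CPU busy: full clock rate useful */
            simulation_timestep();
            phase_end("compute");

            phase_begin("communicate");   /* NIC active, CPU partly idle */
            exchange_halos();
            phase_end("communicate");

            phase_begin("io");            /* disks needed, CPU could be slowed */
            write_checkpoint(step);
            phase_end("io");
        }
        return 0;
    }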

Keywords

Scientific I/O API · Energy efficiency · ADIOS · Performance analysis · Performance optimization



Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Julian M. Kunkel (1)
  • Timo Minartz (1)
  • Michael Kuhn (1)
  • Thomas Ludwig (2)
  1. Department of Informatics, University of Hamburg, Hamburg, Germany
  2. DKRZ GmbH & Department of Informatics, University of Hamburg, Hamburg, Germany
