Beyond Exaflop Computing: Reaching the Frontiers of High Performance Computing

Conference paper

Abstract

High Performance Computing (HPC) has benefitted over the years from a continuous increase in the speed of processors and systems. Over time we have reached Megaflops, Gigaflops, Teraflops, and finally, in 2010, Petaflops. The next step in the ongoing race for speed is the Exaflop. In the US and in Japan, plans are being made for systems intended to reach one Exaflop. The timing is not yet clear, but estimates suggest that such a system might be available sometime between 2018 and 2020. While we debate how and when to install an Exaflop system, discussions have started about what to expect beyond Exaflops. A growing group of people takes a pessimistic view of High Performance Computing, assuming that its continuous development might come to an end. We should, however, take a more pragmatic view: a change in hardware development should not be seen as an excuse to ignore the potential for improvement in software.

Keywords

High Performance Computing, Clock Frequency, Sustained Performance, Fast System, High Performance Computer

Acknowledgements

The author would like to thank Hans Meuer and his team for providing extremely valuable insight into the development of High Performance Computing over the last 20 years by collecting information in the TOP500 list.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Höchstleistungsrechenzentrum Stuttgart (HLRS), University of Stuttgart, Stuttgart, Germany