The Impact of Injection Bandwidth Performance on Application Scalability

  • Kevin T. Pedretti
  • Ron Brightwell
  • Doug Doerfler
  • K. Scott Hemmert
  • James H. Laros III
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)


Future exascale systems are expected to have significantly lower network bandwidth, relative to computational performance, than current systems. This imbalance will clearly hurt bandwidth-intensive applications, so it is important to quantify its impact on performance and scalability and to identify mitigation strategies. In this paper, we show how current systems can be configured to emulate the expected imbalance of future systems. We demonstrate this approach by reducing the network injection bandwidth of a 160-node, 1920-core Cray XT5 system and analyzing the performance and scalability of a suite of MPI benchmarks and applications.
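The effect the abstract describes can be illustrated with a toy performance model (this model and its numbers are illustrative assumptions, not taken from the paper): if computation and communication do not overlap, reducing injection bandwidth stretches only the communication term, so communication-heavy applications slow down the most.

```python
# Toy model (not from the paper): per-step time on one node when compute
# and network injection do not overlap.  Halving the injection bandwidth
# doubles only the communication term.

def step_time(flops, bytes_injected, flop_rate, inject_bw):
    """Estimated time for one iteration: compute time + injection time."""
    return flops / flop_rate + bytes_injected / inject_bw

# Hypothetical application: 1 GFLOP of work and 100 MB injected per step,
# on a node sustaining 10 GFLOP/s.
full_bw = step_time(1e9, 1e8, 1e10, 2e9)  # full 2 GB/s injection bandwidth
half_bw = step_time(1e9, 1e8, 1e10, 1e9)  # injection bandwidth halved

print(full_bw, half_bw)  # 0.15 vs 0.2: a 33% slowdown, not 2x,
                         # because the compute term is unaffected
```

The point of the sketch is that the observed slowdown depends on the application's compute-to-communication ratio, which is why the paper evaluates a whole suite of benchmarks and applications rather than a single code.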


Keywords: bandwidth, configurability, benchmarking, exascale, co-design





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Kevin T. Pedretti¹
  • Ron Brightwell¹
  • Doug Doerfler¹
  • K. Scott Hemmert¹
  • James H. Laros III¹
  1. Sandia National Laboratories, Albuquerque, USA
