HPC Benchmarking: Scaling Right and Looking Beyond the Average

  • Milan Radulovic
  • Kazi Asifuzzaman
  • Paul Carpenter
  • Petar Radojković
  • Eduard Ayguadé
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11014)

Abstract

Designing a balanced HPC system requires an understanding of the dominant performance bottlenecks. There is as yet no well-established methodology for a unified evaluation of HPC systems and workloads that quantifies the main performance bottlenecks. In this paper, we execute seven production HPC applications on a production HPC platform and analyse the key performance bottlenecks: FLOPS performance and memory bandwidth congestion, and their implications for scaling out. We show that the results depend significantly on the number of execution processes and the granularity of measurements. We therefore advocate that application suites provide guidance on selecting a representative scale for the experiments. We also propose that FLOPS performance and memory bandwidth be reported as the proportions of time spent at low, moderate and severe utilization. We show that this gives much more precise and actionable evidence than the average.
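The proposed reporting scheme can be sketched as follows. This is an illustrative summary of the idea only: the thresholds (40% and 80% of the sustained peak) and the sample data are hypothetical assumptions, not values taken from the paper.

```python
# Sketch: summarize a time series of per-interval memory-bandwidth samples
# as the proportion of time spent at low, moderate and severe utilization,
# instead of reporting a single average.

def utilization_profile(samples_gbps, peak_gbps, low=0.4, severe=0.8):
    """Fractions of intervals with low, moderate and severe utilization.

    Thresholds are fractions of the sustained peak bandwidth; the defaults
    (0.4 and 0.8) are illustrative assumptions.
    """
    n = len(samples_gbps)
    low_n = sum(1 for s in samples_gbps if s < low * peak_gbps)
    severe_n = sum(1 for s in samples_gbps if s >= severe * peak_gbps)
    moderate_n = n - low_n - severe_n
    return {"low": low_n / n, "moderate": moderate_n / n, "severe": severe_n / n}

# Hypothetical equal-length measurement intervals, in GB/s.
# The average (30 GB/s, half of peak) hides that the application is
# near the bandwidth limit for 37.5% of its execution time.
samples = [5, 10, 50, 45, 10, 50, 10, 60]
profile = utilization_profile(samples, peak_gbps=60.0)
```

The same three-bin summary applies unchanged to FLOPS samples against the machine's peak FLOPS rate; the point is that the distribution over time, not its mean, reveals whether a resource is a bottleneck.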

Keywords

HPC applications · Bottlenecks · FLOPS · Memory bandwidth · Scaling-out

Acknowledgements

This work was supported by the Spanish Ministry of Science and Technology (project TIN2015-65316-P), Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), Severo Ochoa Programme (SEV-2015-0493) of the Spanish Government; and the European Union’s Horizon 2020 research and innovation programme under ExaNoDe project (grant agreement No 671578).

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Milan Radulovic (1, 2)
  • Kazi Asifuzzaman (1, 2)
  • Paul Carpenter (1)
  • Petar Radojković (1)
  • Eduard Ayguadé (1, 2)

  1. Barcelona Supercomputing Center (BSC), Barcelona, Spain
  2. Universitat Politècnica de Catalunya (UPC), Barcelona, Spain