Abstract
Looking closely at the Top500 list of the most powerful high-performance computing (HPC) systems in the world, it becomes clear that computing power is not the only figure that has been growing over the last three decades. The power required to operate such massive computing machines has been increasing steadily, giving HPC users a higher-than-usual carbon footprint. While the problem is well known in academia, the exact energy requirements of hardware and software, and how to optimize them, are hard to quantify. To tackle this issue, we need tools to understand software and its relationship with power consumption on today's high-performance computers. With that in mind, we present perun, a Python package and command-line interface to measure energy consumption based on hardware performance counters and selected physical measurement sensors. This enables accurate energy measurements at various scales of computing, from a single laptop to an MPI-distributed HPC application. We include an analysis of the discrepancies between these sensor readings and hardware performance counters, with particular focus on the power draw of usually overlooked non-compute components such as memory. One of our major insights is their significant share of the total energy consumption. We have also analyzed the runtime and energy overhead perun incurs when monitoring common HPC applications and found it to be minimal. Finally, we present an analysis of the accuracy of different measurement methodologies when applied at large scales.
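To make the counter-based methodology concrete, the following is a minimal sketch (not perun's actual implementation) of how energy and average power can be derived from a RAPL-style monotonically increasing microjoule counter, such as those exposed under /sys/class/powercap on Linux. The function names and the wraparound constant are illustrative assumptions; real platforms report the wraparound limit in max_energy_range_uj.

```python
# Illustrative sketch: energy accounting from a RAPL-style microjoule
# counter. Such counters wrap around at a platform-defined maximum, so
# naive subtraction of two samples can yield a negative value.

def energy_delta_uj(start: int, end: int, max_energy_uj: int) -> int:
    """Energy consumed between two counter samples, in microjoules.

    If the end sample is smaller than the start sample, the counter is
    assumed to have wrapped around exactly once.
    """
    if end >= start:
        return end - start
    return (max_energy_uj - start) + end


def average_power_w(start_uj: int, end_uj: int, seconds: float,
                    max_energy_uj: int) -> float:
    """Average power draw over the sampling interval, in watts."""
    return energy_delta_uj(start_uj, end_uj, max_energy_uj) / (seconds * 1e6)


# Example: the counter wrapped once between the two samples.
delta = energy_delta_uj(start=900_000, end=100_000, max_energy_uj=1_000_000)
print(delta)                                               # 200000 uJ
print(average_power_w(900_000, 100_000, 2.0, 1_000_000))   # 0.1 W
```

Sampling such counters periodically on every node, as a tool like perun must do for MPI-distributed applications, then reduces to summing these per-interval deltas across devices and ranks.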
Acknowledgments
This work is supported by the Helmholtz project HiRSE_PS, the Helmholtz AI platform and the HAICORE@KIT grant.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gutiérrez Hermosillo Muriedas, J.P., Flügel, K., Debus, C., Obermaier, H., Streit, A., Götz, M. (2023). perun: Benchmarking Energy Consumption of High-Performance Computing Applications. In: Cano, J., Dikaiakos, M.D., Papadopoulos, G.A., Pericàs, M., Sakellariou, R. (eds) Euro-Par 2023: Parallel Processing. Euro-Par 2023. Lecture Notes in Computer Science, vol 14100. Springer, Cham. https://doi.org/10.1007/978-3-031-39698-4_2
DOI: https://doi.org/10.1007/978-3-031-39698-4_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-39697-7
Online ISBN: 978-3-031-39698-4