Abstract
State-of-the-art machine learning (ML) systems show exceptional qualitative performance, but can also have a negative impact on society. In light of global climate change, the question of resource consumption and sustainability is becoming increasingly urgent. The enormous energy footprint of individual ML applications and experiments has recently been investigated. However, environment-aware users require a unified framework to assess, compare, and report the efficiency and performance trade-off of different methods and models. In this work we propose novel efficiency aggregation, indexing, and rating procedures for ML applications. To this end, we devise a set of metrics that allow for a holistic view, taking task type, abstract model, software, and hardware into account. As a result, ML systems become comparable even across different execution environments. Inspired by the EU’s energy label system, we also introduce a concept for visually communicating efficiency information to the public in a comprehensible way. We apply our methods to over 20 state-of-the-art models on a range of hardware architectures, giving an overview of the modern ML efficiency landscape.
Keywords
- Energy efficiency
- Sustainability
- Resource-aware ML
- Green AI
- Trustworthy AI
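The full paper defines the actual aggregation, indexing, and rating procedures. As a rough illustration of the label idea described in the abstract, the following Python sketch indexes each metric against a reference model, combines the per-metric indices with a weighted geometric mean, and maps the result to a discrete label class. The metric names, weights, and class boundaries below are hypothetical choices for this example, not values taken from the paper.

```python
from math import prod  # requires Python >= 3.8

def index_metric(value: float, reference: float, higher_is_better: bool) -> float:
    """Index a raw metric against a reference model's value:
    1.0 means 'on par with the reference', larger means better."""
    return value / reference if higher_is_better else reference / value

def compound_index(indices: dict, weights: dict) -> float:
    """Aggregate per-metric indices via a weighted geometric mean,
    which keeps the compound score scale-free across heterogeneous metrics."""
    total = sum(weights.values())
    return prod(idx ** (weights[name] / total) for name, idx in indices.items())

def label(score: float) -> str:
    """Map the compound index to an energy-label-style class
    (boundary values are made up for illustration)."""
    for cls, bound in (("A", 2.0), ("B", 1.5), ("C", 1.0), ("D", 0.5)):
        if score >= bound:
            return cls
    return "E"

# Example: compare a candidate model against a reference on two metrics.
indices = {
    "top1_accuracy": index_metric(0.78, 0.76, higher_is_better=True),
    "energy_per_inference_joules": index_metric(0.9, 2.4, higher_is_better=False),
}
weights = {"top1_accuracy": 1.0, "energy_per_inference_joules": 1.0}
score = compound_index(indices, weights)
print(f"compound index: {score:.2f} -> class {label(score)}")
```

A geometric mean is a natural choice here because the indexed metrics are dimensionless ratios, so they compose multiplicatively and no single unit dominates the aggregate.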
Acknowledgement
This research has been funded by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr-Institute for Machine Learning and Artificial Intelligence, LAMARR22B.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Fischer, R., Jakobs, M., Mücke, S., Morik, K. (2023). A Unified Framework for Assessing Energy Efficiency of Machine Learning. In: Koprinska, I., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol 1752. Springer, Cham. https://doi.org/10.1007/978-3-031-23618-1_3
DOI: https://doi.org/10.1007/978-3-031-23618-1_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-23617-4
Online ISBN: 978-3-031-23618-1