
A Unified Framework for Assessing Energy Efficiency of Machine Learning

Part of the Communications in Computer and Information Science book series (CCIS, volume 1752)


Abstract

State-of-the-art machine learning (ML) systems show exceptional qualitative performance, but can also have a negative impact on society. With regard to global climate change, the question of resource consumption and sustainability is becoming increasingly urgent. The enormous energy footprint of individual ML applications and experiments has recently been investigated. However, environment-aware users require a unified framework to assess, compare, and report the efficiency and performance trade-offs of different methods and models. In this work we propose novel efficiency aggregation, indexing, and rating procedures for ML applications. To this end, we devise a set of metrics that allow for a holistic view, taking task type, abstract model, software, and hardware into account. As a result, ML systems become comparable even across different execution environments. Inspired by the EU's energy label system, we also introduce a concept for visually communicating efficiency information to the public in a comprehensible way. We apply our methods to over 20 state-of-the-art models on a range of hardware architectures, giving an overview of the modern ML efficiency landscape.
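The indexing-and-rating idea outlined in the abstract can be sketched roughly as follows. This is a minimal illustration only: the metric names, the relative-to-reference indexing, the geometric-mean aggregation, and the label boundaries are all assumptions for the sake of the example, not the paper's actual definitions.

```python
# Hypothetical sketch: index each measured metric (e.g. energy per inference)
# against a reference model on the same hardware, aggregate the indices into
# one compound score, and map that score to a discrete label in the style of
# the EU energy scale. Boundaries and aggregation are illustrative choices.

from math import prod


def index_metric(value: float, reference: float, higher_is_better: bool = False) -> float:
    """Index a raw measurement relative to a reference model's measurement.

    An index > 1 means the candidate beats the reference on this metric.
    """
    return value / reference if higher_is_better else reference / value


def aggregate(indices: list[float]) -> float:
    """Combine per-metric indices into a single compound score (geometric mean)."""
    return prod(indices) ** (1 / len(indices))


def rate(score: float, boundaries: tuple = (1.5, 1.0, 0.5, 0.25)) -> str:
    """Map a compound score to a label from A (best) down to E (worst)."""
    for label, bound in zip("ABCD", boundaries):
        if score >= bound:
            return label
    return "E"


# Example: a model using half the reference energy (index 2.0) while reaching
# 95% of the reference accuracy (index 0.95).
energy_index = index_metric(5.0, reference=10.0)                             # 2.0
accuracy_index = index_metric(0.76, reference=0.80, higher_is_better=True)   # 0.95
print(rate(aggregate([energy_index, accuracy_index])))                       # prints "B"
```

Indexing against a reference model is what would make results comparable across execution environments, since both candidate and reference are measured on the same hardware; again, the concrete formulas here are placeholders.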


Keywords

  • Energy efficiency
  • Sustainability
  • Resource-aware ML
  • Green AI
  • Trustworthy AI







Acknowledgements

This research has been funded by the Federal Ministry of Education and Research of Germany and the state of North-Rhine Westphalia as part of the Lamarr-Institute for Machine Learning and Artificial Intelligence, LAMARR22B.

Corresponding author

Correspondence to Raphael Fischer.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Fischer, R., Jakobs, M., Mücke, S., Morik, K. (2023). A Unified Framework for Assessing Energy Efficiency of Machine Learning. In: Koprinska, I., et al. (eds.) Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol. 1752. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23617-4

  • Online ISBN: 978-3-031-23618-1

  • eBook Packages: Computer Science, Computer Science (R0)