Deep Learning for Power and Switching Activity Estimation

Chapter in Machine Learning Applications in Electronic Design Automation

Abstract

This chapter covers techniques for achieving fast and accurate switching activity estimation, which in turn enables efficient power estimation. It first introduces power estimation and switching activity estimation and their role in VLSI design. The chapter then describes the conventional tools used for power estimation, gate-level simulation and switching activity estimators, along with the drawbacks these tools and methods pose. Next, it reviews the modeling methods that have been tried and tested in the past to alleviate some of those drawbacks. Finally, an overview of the emerging trend of applying deep learning models and techniques to power estimation is given, followed by a more in-depth treatment of the topic through two case studies. The two case studies focus on state-of-the-art deep learning models: convolutional neural networks and graph neural networks.
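The link between switching activity and dynamic power that the abstract alludes to can be made concrete with the standard first-order model P_dyn = Σ ½·α·C·Vdd²·f over all nets, where α is each net's toggle rate per clock cycle and C its load capacitance. The sketch below is illustrative only; the net list, capacitances, and supply/frequency values are hypothetical, not taken from the chapter.

```python
# Minimal sketch of gate-level dynamic power estimation from switching
# activity, using the first-order model P_dyn = 0.5 * alpha * C * Vdd^2 * f
# summed over nets. All numeric values below are illustrative assumptions.

def dynamic_power(nets, vdd, freq_hz):
    """Sum per-net dynamic power.

    nets: iterable of (alpha, cap_farads) pairs, where alpha is the
          net's toggle rate (transitions per clock cycle).
    """
    return sum(0.5 * alpha * cap_f * vdd ** 2 * freq_hz
               for alpha, cap_f in nets)

# Two hypothetical nets: toggle rates from a simulation or an activity
# estimator, load capacitances from extraction.
nets = [
    (0.20, 5e-15),   # toggles in 20% of cycles, 5 fF load
    (0.05, 10e-15),  # toggles in 5% of cycles, 10 fF load
]

p_watts = dynamic_power(nets, vdd=0.9, freq_hz=1e9)
```

Because the model is linear in α, the accuracy of any power estimate is bounded by the accuracy of the per-net switching activities, which is why the chapter focuses on estimating them efficiently.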



Author information

Correspondence to Yanqing Zhang.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Zhang, Y. (2022). Deep Learning for Power and Switching Activity Estimation. In: Ren, H., Hu, J. (eds) Machine Learning Applications in Electronic Design Automation. Springer, Cham. https://doi.org/10.1007/978-3-031-13074-8_4
