The Interplay of Online and Offline Machine Learning for Design Flow Tuning

Chapter in Machine Learning Applications in Electronic Design Automation

Abstract

Modern logic and physical synthesis tools provide numerous options and parameters that can drastically affect design quality; however, the large number of options creates a complex design space that is difficult for human designers to navigate. Fortunately, machine learning approaches and cloud computing environments are well suited for tackling complex parameter-tuning problems like those seen in VLSI design flows. This chapter proposes a holistic approach in which online and offline machine learning approaches work together to tune industrial design flows. We provide an overview of recent research on design flow tuning, spanning the application domains of high-level synthesis (HLS), field-programmable gate array (FPGA) synthesis and place-and-route, and VLSI logic synthesis and physical design (LSPD). We highlight the industrial design flow tuner SynTunSys (STS) as a case study. This system has been used to optimize multiple high-performance processors. STS consists of an online system that optimizes designs and generates data for a recommender system that performs offline training and recommendation. Experimental results show that the collaboration between the STS online and offline machine learning systems, combined with insight from human designers, provides best-of-breed results. Finally, we discuss potential new directions for design flow tuning research.
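
As a rough illustration of this interplay, the minimal sketch below (hypothetical function and data names only, not the STS implementation) shows the data flow: an online tuner evaluates candidate parameter scenarios through the design flow and archives the results, while the offline component consumes the accumulated archive and recommends scenarios for the next online run.

    # Minimal sketch of an online/offline tuning loop; all names are hypothetical.
    from typing import Callable, Dict, List, Tuple

    Scenario = Dict[str, str]                # one set of design flow parameter settings
    Archive = List[Tuple[Scenario, float]]   # (scenario, cost) pairs from past flow runs

    def online_round(candidates: List[Scenario],
                     run_flow: Callable[[Scenario], float],
                     archive: Archive) -> Scenario:
        """Run each candidate through the flow, archive its cost, and return the best one."""
        results = [(s, run_flow(s)) for s in candidates]
        archive.extend(results)                      # training data for the offline system
        return min(results, key=lambda r: r[1])[0]   # lower cost is better

    def offline_recommend(archive: Archive, top_k: int = 3) -> List[Scenario]:
        """Stand-in for the learned recommender: rank archived scenarios by cost."""
        ranked = sorted(archive, key=lambda r: r[1])
        return [s for s, _ in ranked[:top_k]]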


Notes

  1. We describe only primitives using CAD tool parameters for brevity, although primitives may consist of design flow parameters outside of the CAD tool and include snippets of code to modify the design flow; a hypothetical example follows these notes.

  2. We consider QoR and PPA synonymous terms, although we use QoR in the STS context since we often tune numerous metrics beyond performance, power, and area.

  3. Typical timing metrics of interest are (i) internal slack, e.g., latch-to-latch slack (L2L), (ii) worst negative slack (WNS), and (iii) total negative slack (TNS); a small worked example follows these notes.

  4. The primitive library and historical primitive performance are two ways the online system improves “offline”; however, a new online system can still be effective without these evolving improvements.

  5. What we call variables are often referred to as parameters or weights in other machine learning applications and recommender systems. We use the term variables here to avoid confusion with the design flow parameters.

  6. CP and Tucker are two widely used methods for tensor decomposition. The acronym CP stands for either (1) CANDECOMP/PARAFAC (canonical decomposition/parallel factor analysis) or (2) canonical polyadic decomposition. Tucker is a generalization of CP in which the core tensor is not super-diagonal and contains hidden features [27]. With CP, it is possible to explicitly represent the latent information for each macro separately (see the sketch following these notes).

  7. A primitive is typically categorized based on the expected metrics it will affect, which follows Consideration 3 in Sect. 13.2.1.

  8. While empirically comparing multiple selection and blending algorithms is feasible in concept, the compute effort required in practice would be quite expensive.
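
Following up on note 1, a purely hypothetical example of a primitive as a named bundle of parameter settings (the parameter names below are invented for illustration and are not actual tool options):

    # Hypothetical primitives: each bundles a few related flow parameter settings.
    PRIMITIVES = {
        "area_recovery":    {"effort_area": "high", "restructure_logic": True},
        "late_hold_fixing": {"fix_hold": True, "hold_margin_ps": 10},
    }

    def build_scenario(primitive_names):
        """Merge the selected primitives into a single parameter set for one flow run."""
        params = {}
        for name in primitive_names:
            params.update(PRIMITIVES[name])
        return params

    print(build_scenario(["area_recovery", "late_hold_fixing"]))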
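
Following up on note 3, a small worked example of how WNS and TNS are derived from a set of timing endpoint slacks (the slack values are invented):

    # Hypothetical endpoint slacks in ps; negative values indicate failing paths.
    slacks = [12.0, -3.5, 0.0, -7.2, 4.1]

    wns = min(slacks)                       # worst (most negative) slack: -7.2 ps
    tns = sum(s for s in slacks if s < 0)   # sum of negative slacks only: -10.7 ps
    print(f"WNS = {wns:.1f} ps, TNS = {tns:.1f} ps")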
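
Following up on note 6, a NumPy sketch of the structural difference between CP and Tucker for a 3-way tensor (random placeholder factors; no fitting algorithm is shown):

    import numpy as np

    I, J, K, R = 4, 5, 6, 3                          # tensor dimensions and rank
    A, B, C = np.random.rand(I, R), np.random.rand(J, R), np.random.rand(K, R)

    # CP: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]  (implicit super-diagonal core)
    X_cp = np.einsum('ir,jr,kr->ijk', A, B, C)

    # Tucker: X[i,j,k] = sum_{p,q,r} G[p,q,r] * A[i,p] * B[j,q] * C[k,r]
    # with a dense core G; CP is the special case where G is super-diagonal.
    G = np.random.rand(R, R, R)
    X_tucker = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

    print(X_cp.shape, X_tucker.shape)                # both (4, 5, 6)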

References

  1. Abdi, H., Williams, L.J.: Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2(4), 433–459 (2010)

  2. Garrido-Merchán, E.C., Hernández-Lobato, D.: Dealing with categorical and integer-valued variables in Bayesian optimization with Gaussian processes. Neurocomputing 380, 20–35 (2020)

  3. DSO.ai: AI-driven design applications. https://www.synopsys.com/implementation-and-signoff/ml-ai-design/dso-ai.html. Accessed 11 Aug 2021

  4. Cerebrus intelligent chip design. https://www.cadence.com/en_US/home/tools/digital-design-and-signoff/soc-implementation-and-floorplanning/cerebrus-intelligent-chip-explorer.html. Accessed 11 Aug 2021

  5. Jung, J., Kahng, A.B., Kim, S., Varadarajan, R.: METRICS2.1 and flow tuning in the IEEE CEDA robust design flow and OpenROAD. In: IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2021)

  6. Ziegler, M.M., Liu, H.Y., Gristede, G., Owens, B., Nigaglioni, R., Carloni, L.P.: A synthesis-parameter tuning system for autonomous design-space exploration. In: Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1148–1151 (2016)

  7. Ziegler, M.M., Liu, H.Y., Gristede, G., Owens, B., Nigaglioni, R., Kwon, J., Carloni, L.P.: SynTunSys: a synthesis parameter autotuning system for optimizing high-performance processors. In: Machine Learning in VLSI Computer-Aided Design, pp. 539–570. Springer, Berlin (2019)

  8. Taylor, S.: POWER7+: IBM’s next generation POWER microprocessor. In: Hot Chips 24 (2012)

  9. Ziegler, M.M., Gristede, G.D., Zyuban, V.V.: Power reduction by aggressive synthesis design space exploration. In: International Symposium on Low Power Electronics and Design (ISLPED), pp. 421–426 (2013)

  10. Ziegler, M.M., Liu, H.Y., Carloni, L.P.: Scalable auto-tuning of synthesis parameters for optimizing high-performance processors. In: International Symposium on Low Power Electronics and Design (ISLPED), pp. 180–185 (2016)

  11. Kwon, J., Ziegler, M.M., Carloni, L.P.: A learning-based recommender system for autotuning design flows of industrial high-performance processors. In: ACM/IEEE Design Automation Conference (DAC) (2019)

  12. Liu, H.Y., Carloni, L.P.: On learning-based methods for design-space exploration with high-level synthesis. In: Design Automation Conference (DAC) (2013)

  13. Meng, P., Althoff, A., Gautier, Q., Kastner, R.: Adaptive threshold non-Pareto elimination: re-thinking machine learning for system level design space exploration on FPGAs. In: Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 918–923 (2016)

  14. Xu, C., Liu, G., Zhao, R., Yang, S., Luo, G., Zhang, Z.: A parallel bandit-based approach for autotuning FPGA compilation. In: International Symposium on Field-Programmable Gate Arrays (FPGA), pp. 157–166 (2017)

  15. Ansel, J., Kamil, S., Veeramachaneni, K., Ragan-Kelley, J., Bosboom, J., O’Reilly, U.M., Amarasinghe, S.: OpenTuner: an extensible framework for program autotuning. In: Proceedings of the 23rd International Conference on Parallel Architectures and Compilation, pp. 303–316 (2014)

  16. Ma, Y., Yu, Z., Yu, B.: CAD tool design space exploration via Bayesian optimization. In: Workshop on Machine Learning for CAD (MLCAD) (2019)

  17. Ziegler, M.M., Puri, R., Philhower, B., Franch, R., Luk, W., Leenstra, J., Verwegen, P., Fricke, N., Gristede, G., Fluhr, E., Zyuban, V.: POWER8 design methodology innovations for improving productivity and reducing power. In: Custom Integrated Circuits Conference (CICC) (2014)

  18. Kwon, J., Carloni, L.P.: Transfer learning for design-space exploration with high-level synthesis. In: Workshop on Machine Learning for CAD (MLCAD), pp. 163–168 (2020)

  19. Wang, Z., Schafer, B.C.: Machine learning to set meta-heuristic specific parameters for high-level synthesis design space exploration. In: ACM/EDAC/IEEE Design Automation Conference (DAC) (2020)

  20. Agnesina, A., Lim, S.K., Lepercq, E., Cid, J.E.D.: Improving FPGA-based logic emulation systems through machine learning. ACM Trans. Des. Autom. Electron. Syst. 25(5), 1–20 (2020)

  21. Xie, Z., Fang, G.Q., Huang, Y.H., Ren, H., Zhang, Y., Khailany, B., Fang, S.Y., Hu, J., Chen, Y., Barboza, E.C.: FIST: A feature-importance sampling and tree-based method for automatic design flow parameter tuning. In: Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 19–25 (2020)

  22. Davis, R., Franzon, P., Francisco, L., Huggins, B., Jain, R.: Fast and accurate PPA modeling with transfer learning. In: IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2021)

  23. Mirhoseini, A., Goldie, A., Yazgan, M., Jiang, J.W., Songhori, E., Wang, S., Lee, Y.J., Johnson, E., Pathak, O., Nazi, A., et al.: A graph placement methodology for fast chip design. Nature 594(7862), 207–212 (2021)

  24. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. IEEE Comput. 42(8) (2009)

  25. Zhang, F., Yuan, N.J., Zheng, K., Lian, D., Xie, X., Rui, Y.: Exploiting dining preference for restaurant recommendation. In: International Conference on World Wide Web (2016)

  26. Sidiropoulos, N.D., De Lathauwer, L., Fu, X., Huang, K., Papalexakis, E.E., Faloutsos, C.: Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 65(13), 3551–3582 (2017)

  27. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  28. Kapre, N., Chandrashekaran, B., Ng, H., Teo, K.: Driving timing convergence of FPGA designs through machine learning and cloud computing. In: International Symposium on Field-Programmable Custom Computing Machines, pp. 119–126. IEEE, Piscataway (2015)

  29. Liang, R., Jung, J., Xiang, H., Reddy, L., Lvov, A., Hu, J., Nam, G.J.: FlowTuner: a multi-stage EDA flow tuner exploiting parameter knowledge transfer. In: IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2021)

  30. Ziegler, M.M., Kwon, J., Liu, H.Y., Carloni, L.P.: Online and offline machine learning for industrial design flow tuning: (Invited-ICCAD special session paper). In: IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2021)

  31. Wolpert, D.H.: Stacked generalization. Neural Netw. 5(2), 241–259 (1992)

  32. Sill, J., Takács, G., Mackey, L., Lin, D.: Feature-weighted linear stacking. arXiv preprint arXiv:0911.0460 (2009)

  33. Fluhr, E.J., Friedrich, J., Dreps, D., Zyuban, V., Still, G., Gonzalez, C., Hall, A., Hogenmiller, D., Malgioglio, F., Nett, R., Paredes, J., Pille, J., Plass, D., Puri, R., Restle, P., Shan, D., Stawiasz, K., Deniz, Z.T., Wendel, D., Ziegler, M.: POWER8: a 12-core server-class processor in 22nm SOI with 7.6Tb/s off-chip bandwidth. In: International Solid-State Circuits Conference (ISSCC), pp. 96–97 (2014)

  34. Warnock, J., Curran, B., Badar, J., Fredeman, G., Plass, D., Chan, Y., Carey, S., Salem, G., Schroeder, F., Malgioglio, F., Mayer, G., Berry, C., Wood, M., Chan, Y.H., Mayo, M., Isakson, J., Nagarajan, C., Werner, T., Sigal, L., Nigaglioni, R., Cichanowski, M., Zitz, J., Ziegler, M., Bronson, T., Strevig, G., Dreps, D., Puri, R., Malone, D., Wendel, D., Mak, P.K., Blake, M.: 22nm next-generation IBM System z microprocessor. In: International Solid-State Circuits Conference (ISSCC), pp. 1–3 (2015)

  35. Ziegler, M.M., Bertran, R., Buyuktosunoglu, A., Bose, P.: Machine learning techniques for taming the complexity of modern hardware design. IBM J. Res. Develop. 61(4/5), 13:1–13:14 (2017)

  36. Ziegler, M.M., Reddy, L.N., Franch, R.L.: Design flow parameter optimization with multi-phase positive nondeterministic tuning. In: Proceedings of the 2022 International Symposium on Physical Design (2022)

  37. Kandasamy, K., Krishnamurthy, A., Schneider, J.G., Póczos, B.: Parallelised Bayesian optimisation via Thompson sampling. In: International Conference on Artificial Intelligence and Statistics (AISTATS) (2018)

  38. Zhang, S., Yang, F., Zhou, D., Zeng, X.: An efficient asynchronous batch Bayesian optimization approach for analog circuit synthesis. In: ACM/EDAC/IEEE Design Automation Conference (DAC) (2020)

  39. Anwar, M., Saha, S., Ziegler, M.M., Reddy, L.: Early scenario pruning for efficient design space exploration in physical synthesis. In: International Conference on VLSI Design (VLSID), pp. 116–121 (2016)

  40. Ziegler, M.M., Gristede, G.D.: Synthesis tuning system for VLSI design optimization. U.S. Patent 9,910,949, 6 Mar 2018

  41. Schafer, B.C., Wang, Z.: High-level synthesis design space exploration: past, present, and future. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 39(10), 2628–2639 (2019)

  42. Agrawal, P., Broxterman, M., Chatterjee, B., Cuevas, P., Hayashi, K.H., Kahng, A.B., Myana, P.K., Nath, S.: Optimal scheduling and allocation for IC design management and cost reduction. ACM Trans. Des. Autom. Electron. Syst. 22(4), 293–306 (2017)


Corresponding author

Correspondence to Matthew M. Ziegler.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Ziegler, M.M., Kwon, J., Liu, H.Y., Carloni, L.P. (2022). The Interplay of Online and Offline Machine Learning for Design Flow Tuning. In: Ren, H., Hu, J. (eds.) Machine Learning Applications in Electronic Design Automation. Springer, Cham. https://doi.org/10.1007/978-3-031-13074-8_13
