SynTunSys: A Synthesis Parameter Autotuning System for Optimizing High-Performance Processors

  • Matthew M. Ziegler
  • Hung-Yi Liu
  • George Gristede
  • Bruce Owens
  • Ricardo Nigaglioni
  • Jihye Kwon
  • Luca P. Carloni
Chapter

Abstract

Advanced logic and physical synthesis tools provide numerous options and parameters that can drastically impact design quality; however, the large number of options leads to a complex design space that is difficult for human designers to navigate. By employing intelligent search strategies and parallel computing, we can tackle this parameter tuning problem, thus automating one of the key design tasks conventionally performed by a human designer. To fully utilize the optimization potential of these tools, we propose SynTunSys, a system that adds a new level of abstraction between designers and design tools for managing the design space exploration process. SynTunSys takes control of the synthesis parameter tuning process, i.e., job submission, results analysis, and next-step decision making, automating one of the more difficult decision processes faced by designers. This system has been employed for optimizing multiple IBM high-performance server chips and presents numerous opportunities for future intelligent automation research.
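To make the tuning loop described above concrete, the following is a minimal sketch of a synthesis-parameter autotuner. All names, parameters, and the toy cost model are hypothetical illustrations, not SynTunSys internals: a real system would submit synthesis jobs in parallel, analyze timing/power/congestion results, and decide which scenarios to try next rather than exhaustively enumerating the space.

```python
import itertools

# Hypothetical parameter space: a few synthesis options and their settings.
# Real tools expose hundreds of such parameters; three suffice for a sketch.
PARAM_SPACE = {
    "effort": ["low", "medium", "high"],
    "restructure": [False, True],
    "vt_mix": ["lvt", "rvt", "mixed"],
}

def mock_synthesis(scenario):
    """Stand-in for one synthesis run: returns a scalar cost
    (e.g., a weighted sum of timing and power). Purely illustrative."""
    cost = 0.0
    cost += {"low": 3.0, "medium": 2.0, "high": 1.0}[scenario["effort"]]
    cost += 0.0 if scenario["restructure"] else 1.5
    cost += {"lvt": 2.0, "rvt": 1.0, "mixed": 0.5}[scenario["vt_mix"]]
    return cost

def tune(param_space, evaluate):
    """Exhaustively evaluate every scenario and keep the best one.
    An intelligent tuner would instead sample scenarios, run jobs in
    parallel, and use prior results to guide the next round."""
    names = list(param_space)
    best_scenario, best_cost = None, float("inf")
    for values in itertools.product(*(param_space[n] for n in names)):
        scenario = dict(zip(names, values))
        cost = evaluate(scenario)
        if cost < best_cost:
            best_scenario, best_cost = scenario, cost
    return best_scenario, best_cost

best, cost = tune(PARAM_SPACE, mock_synthesis)
print(best, cost)
```

The exhaustive loop here is exactly what becomes intractable at scale; the chapter's contribution is replacing it with guided search plus parallel job management.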


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Matthew M. Ziegler (1)
  • Hung-Yi Liu (2)
  • George Gristede (1)
  • Bruce Owens (3)
  • Ricardo Nigaglioni (4)
  • Jihye Kwon (5)
  • Luca P. Carloni (5)

  1. IBM T. J. Watson Research Center, Yorktown Heights, USA
  2. Intel Technology and Manufacturing Group, Hillsboro, USA
  3. IBM Systems and Technology Group, Rochester, USA
  4. IBM Systems and Technology Group, Austin, USA
  5. Department of Computer Science, Columbia University, New York, USA