Abstract
To implement a Machine Learning (ML) model in hardware (Hw), usually a first Design Space Exploration (DSE) optimizes the model hyper-parameters in search of the best ML performance, while a second DSE finds the configuration with the best Hw performance. Multiple iterations of these steps might be needed, as the optimal ML model may not be implementable. To reduce design time and provide the designer with a single exploration environment, we propose a general framework based on Bayesian Optimization (BO) and High-Level Synthesis (HLS), which performs both DSEs at once, generating efficient Pareto curves in the joint space of ML and Hw performance.
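The framework's output, as described above, is a Pareto front over two objectives evaluated jointly: ML performance (e.g., error) and Hw performance (e.g., latency). A minimal sketch of this idea is shown below; the design space, the `evaluate` function, and its toy cost model are purely illustrative stand-ins for training plus HLS synthesis, not the paper's actual framework:

```python
import itertools

def dominates(q, p):
    """q dominates p if q is no worse in both objectives and better in one
    (both objectives minimized)."""
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    """Keep only the non-dominated (ml_error, hw_latency) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical joint design space: one ML hyper-parameter (hidden units)
# and one HLS directive (loop-unroll factor). Values are illustrative only.
HIDDEN_UNITS = [8, 16, 32]
UNROLL_FACTORS = [1, 2, 4]

def evaluate(units, unroll):
    # Stand-in for training + HLS synthesis of one configuration:
    # a larger model lowers error but raises latency; unrolling cuts latency.
    ml_error = 1.0 / units
    hw_latency = units / unroll
    return (ml_error, hw_latency)

points = [evaluate(u, f)
          for u, f in itertools.product(HIDDEN_UNITS, UNROLL_FACTORS)]
front = pareto_front(points)
```

In the paper's framework, the exhaustive `itertools.product` sweep is replaced by BO, which uses a surrogate model to pick the next configuration to synthesize, so far fewer HLS runs are needed to approximate the same front.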
Notes
1. We leave power optimization for future work.
Acknowledgments
This work was supported by the EMERALD project funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 764479.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Mansoori, M.A., Casu, M.R. (2022). Efficient Training and Hardware Co-design of Machine Learning Models. In: Saponara, S., De Gloria, A. (eds) Applications in Electronics Pervading Industry, Environment and Society. ApplePies 2021. Lecture Notes in Electrical Engineering, vol 866. Springer, Cham. https://doi.org/10.1007/978-3-030-95498-7_34
Print ISBN: 978-3-030-95497-0
Online ISBN: 978-3-030-95498-7
eBook Packages: Engineering, Engineering (R0)