Abstract
Almost every deep learning model has a large number of hyperparameters, and choosing proper values for them is one of the most common problems in AutoML. A small change in a single hyperparameter can significantly change a model's performance. Hyperparameter Optimization (HPO) is therefore the first and often most effective step in deep learning model tuning; due to its ubiquity, HPO is sometimes regarded as synonymous with AutoML itself. This chapter examines various neural network designs and shows how NNI can be applied to optimize their hyperparameters for particular problems.
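To make the HPO idea concrete, here is a minimal, self-contained sketch of the generic loop the abstract describes: define a search space, sample candidate configurations, evaluate each one, and keep the best. The search space, the `objective` function, and all names below are hypothetical illustrations, not the chapter's code; NNI expresses search spaces in a similar dictionary form but drives the trial loop itself.

```python
import random

# Hypothetical search space: each hyperparameter maps to its candidate values.
# (Illustrative only; not taken from the chapter.)
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "hidden_units": [32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def objective(params):
    # Stand-in for training a model and returning a validation score.
    # A real objective would train the network with these hyperparameters.
    return (
        -abs(params["learning_rate"] - 0.01)
        + params["hidden_units"] / 1000
        - params["dropout"] * 0.1
    )

def random_search(space, objective, n_trials=50, seed=0):
    # The simplest HPO strategy: sample configurations uniformly at random
    # and track the best-scoring one.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search(search_space, objective)
```

More sophisticated tuners (grid search, annealing, Bayesian methods) replace only the sampling step of this loop; the chapter explores such strategies through NNI's built-in tuners.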
Copyright information
© 2022 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature
About this chapter
Cite this chapter
Gridin, I. (2022). Hyperparameter Optimization. In: Automated Deep Learning Using Neural Network Intelligence. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-8149-9_2
DOI: https://doi.org/10.1007/978-1-4842-8149-9_2
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-8148-2
Online ISBN: 978-1-4842-8149-9
eBook Packages: Professional and Applied Computing, Apress Access Books