
OmniOpt – A Tool for Hyperparameter Optimization on HPC

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12761)


Hyperparameter optimization is a crucial task in numerous applications of numerical modelling techniques. Methods as diverse as classical simulations and the wide variety of machine learning techniques in use today require an appropriate choice of their hyperparameters (HPs). While for classical simulations, calibration to measured data by numerical optimization techniques has a long tradition, the HPs of neural networks are often chosen by a mixture of grid search, random search and manual tuning.

In the present study the expert tool “OmniOpt” is introduced, which allows users to optimize the HPs of a wide range of problems, from classical simulations to different kinds of neural networks. The emphasis is on versatility and flexibility in terms of the supported applications and the choice of HPs to be optimized. Moreover, the optimization procedure – usually a very time-consuming task – is performed in a highly parallel way on the HPC system Taurus at TU Dresden. To this end, a Bayesian stochastic optimization algorithm, the Tree-structured Parzen Estimator (TPE), has been implemented on the Taurus system and connected to a user-friendly graphical user interface (GUI). In addition to the automatic optimization service, a variety of tools is provided for analyzing and graphically displaying the results of the optimization.
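The core idea of the TPE algorithm mentioned above can be illustrated with a small, self-contained sketch: past trials are split into “good” and “bad” sets at a quantile of their losses, each set is modelled with a Parzen (kernel density) estimator, and the next candidate is the one maximizing the ratio of the two densities. The one-dimensional objective and all function names below are illustrative assumptions for exposition only, not OmniOpt's actual implementation or API.

```python
import math
import random

random.seed(0)  # reproducible toy run

def parzen_density(x, samples, bandwidth=0.1):
    """Gaussian kernel density estimate at x from a list of 1-D samples."""
    if not samples:
        return 1.0
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) / norm
               for s in samples) / len(samples)

def tpe_suggest(history, gamma=0.25, n_candidates=32):
    """Split past trials into 'good' and 'bad' at the gamma quantile of
    their losses, model each set with a Parzen estimator l(x) and g(x),
    and return the candidate maximizing the ratio l(x) / g(x)."""
    ordered = sorted(history, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(ordered)))
    good = [x for x, _ in ordered[:n_good]]
    bad = [x for x, _ in ordered[n_good:]]
    candidates = [random.random() for _ in range(n_candidates)]
    return max(candidates, key=lambda x:
               parzen_density(x, good) / (parzen_density(x, bad) + 1e-12))

def optimize(objective, n_startup=5, n_trials=30):
    """A few random startup trials, then TPE suggestions; best trial wins."""
    history = []
    for i in range(n_trials):
        x = random.random() if i < n_startup else tpe_suggest(history)
        history.append((x, objective(x)))
    return min(history, key=lambda t: t[1])

# Toy 1-D objective with its minimum at x = 0.3.
best_x, best_loss = optimize(lambda x: (x - 0.3) ** 2)
```

In a production setting such as OmniOpt, each objective evaluation is a full training run dispatched as a batch job on the HPC system, so the trials can be evaluated in parallel rather than sequentially as in this sketch.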

The application of OmniOpt to a practical problem from materials science is presented as an example.


  • Hyperparameter optimization
  • High performance computing
  • Neural networks

This work was supported by the German Federal Ministry of Education and Research (BMBF, 01/S18026A-F) by funding the competence center for Big Data and AI “ScaDS.AI Dresden/Leipzig”.

  • DOI: 10.1007/978-3-030-90539-2_19
  • Chapter length: 12 pages





The authors would like to thank Taras Lazariv for his feedback and support, which helped to improve this work.

Author information

Correspondence to Peter Winkler.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Winkler, P., Koch, N., Hornig, A., Gerritzen, J. (2021). OmniOpt – A Tool for Hyperparameter Optimization on HPC. In: Jagode, H., Anzt, H., Ltaief, H., Luszczek, P. (eds.) High Performance Computing. ISC High Performance 2021. Lecture Notes in Computer Science, vol. 12761. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90538-5

  • Online ISBN: 978-3-030-90539-2

  • eBook Packages: Computer Science, Computer Science (R0)