Part of the book series: SpringerBriefs in Statistics

Abstract

This short chapter provides a few concluding remarks.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Pourmohamad, T., Lee, H.K.H. (2021). Conclusions. In: Bayesian Optimization with Application to Computer Experiments. SpringerBriefs in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-82458-7_5
