Bayesian Optimisation for Objective Functions with Varying Smoothness

  • A. V. Arun Kumar
  • Santu Rana
  • Cheng Li
  • Sunil Gupta
  • Alistair Shilton
  • Svetha Venkatesh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11919)

Abstract

Bayesian optimisation is a popular method for optimising complex, unknown and expensive objective functions. In complex design optimisation problems, additional information about the smoothness, monotonicity or modality of the unknown objective function can be obtained either from domain expertise or from the problem environment. Incorporating such additional information can potentially enhance the performance of the optimisation. We propose a methodology that incorporates this extra information to obtain a better-fitted surrogate model of the unknown objective function. Specifically, for Gaussian Process regression, we propose a covariance function that captures varying smoothness across the input space through a parametric function whose parameters are tuned from the observations. Our experiments on both synthetic benchmark functions and real-world applications demonstrate that embodying such additional knowledge accelerates convergence.
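
The covariance construction described above can be illustrated concretely. The following is a minimal 1-D sketch in Python, assuming a Gibbs-style nonstationary kernel in which a parametric lengthscale function controls the local smoothness of the Gaussian Process; the softplus-linear form of `lengthscale` and the parameter vector `theta` are illustrative assumptions, not the paper's actual parametric function.

```python
import numpy as np

def lengthscale(x, theta):
    """Parametric lengthscale function l(x) > 0.

    Hypothetical form: a softplus-warped linear model, so the
    lengthscale (and hence the local smoothness) can grow or shrink
    smoothly across the input space. theta = (a, b) would be tuned
    from the observations.
    """
    a, b = theta
    return np.log1p(np.exp(a + b * x))  # softplus keeps l(x) positive

def gibbs_kernel(x1, x2, theta):
    """Gibbs (1998) nonstationary covariance for 1-D inputs:

    k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
               * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))

    This is positive semi-definite for any positive l(.), which is
    what allows the surrogate to adapt its smoothness locally.
    """
    x1 = np.asarray(x1)[:, None]
    x2 = np.asarray(x2)[None, :]
    l1 = lengthscale(x1, theta)
    l2 = lengthscale(x2, theta)
    sq = l1**2 + l2**2
    return np.sqrt(2.0 * l1 * l2 / sq) * np.exp(-(x1 - x2)**2 / sq)

# Example: a grid whose left half is rough (short lengthscale) and
# whose right half is smooth (long lengthscale).
x = np.linspace(-3, 3, 50)
K = gibbs_kernel(x, x, theta=(0.0, 1.0))
K += 1e-8 * np.eye(len(x))          # jitter for numerical stability
L = np.linalg.cholesky(K)           # PSD check / prior sampling
sample = L @ np.random.default_rng(0).standard_normal(len(x))
```

In a full Bayesian optimisation loop, `theta` would be fitted from the observed data, for example by maximising the Gaussian Process marginal likelihood alongside the other hyperparameters, and the resulting surrogate would drive the acquisition function as usual.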

Keywords

Bayesian optimisation · Global optimisation · Gaussian Process · Spatially varying kernels

Acknowledgements

This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • A. V. Arun Kumar¹ (corresponding author)
  • Santu Rana¹
  • Cheng Li²
  • Sunil Gupta¹
  • Alistair Shilton¹
  • Svetha Venkatesh¹

  1. Applied Artificial Intelligence Institute (A²I²), Deakin University, Geelong, Australia
  2. National University of Singapore (NUS), Singapore, Singapore