PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection

  • Stefan Hess
  • Tobias Wagner
  • Bernd Bischl
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7997)

Abstract

In most engineering problems, experiments for evaluating the performance of different setups are time-consuming, expensive, or both. Sequential experimental designs have therefore become an indispensable technique for optimizing the objective functions of such problems. In this context, most problems can be considered black-box: no function properties are known a priori that would allow the best-suited surrogate model class to be selected. We therefore propose a new ensemble-based approach that identifies the best surrogate model during the optimization process using reinforcement learning techniques. The procedure is general and can be applied to arbitrary ensembles of surrogate models. Results on 24 well-known black-box functions show that the progressive procedure is capable of selecting suitable models from the ensemble and that it can compete with state-of-the-art methods for sequential optimization.
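The abstract gives only the general idea, not the algorithmic details. As a rough illustration of bandit-style model selection during sequential optimization, the sketch below lets a UCB-like rule pick one surrogate per iteration and rewards it by the improvement over the incumbent best value. All names (`ucb_select`, `progressive_selection`), the toy "models" (plain proposal functions), and the improvement-based reward are illustrative assumptions, not the paper's actual method.

```python
import math
import random


def ucb_select(counts, rewards, t, c=2.0):
    """Upper-confidence-bound selection: mean reward plus exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every model at least once
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i] + math.sqrt(c * math.log(t) / counts[i]),
    )


def progressive_selection(models, objective, budget, x0, seed=0):
    """Toy sequential loop: each step, a bandit picks one 'surrogate' (here just
    a proposal function), the proposal is evaluated, and the chosen model is
    rewarded by the improvement it achieved over the incumbent best value."""
    rng = random.Random(seed)
    counts = [0] * len(models)
    rewards = [0.0] * len(models)
    best = objective(x0)  # initial evaluation sets the incumbent
    for t in range(1, budget + 1):
        i = ucb_select(counts, rewards, t)
        x = models[i](rng)            # model i proposes a candidate point
        y = objective(x)              # expensive black-box evaluation
        reward = max(0.0, best - y)   # reward = improvement over incumbent
        best = min(best, y)
        counts[i] += 1
        rewards[i] += reward
    return best, counts


if __name__ == "__main__":
    # Two stand-in "surrogates": a broad uniform sampler and a sampler
    # concentrated near the optimum of the toy objective f(x) = x^2.
    models = [
        lambda rng: rng.uniform(-5.0, 5.0),
        lambda rng: rng.gauss(0.0, 0.5),
    ]
    best, counts = progressive_selection(models, lambda x: x * x, budget=50, x0=4.0)
    print(best, counts)
```

The key design point mirrored from the abstract is that model quality is judged online, during the run, rather than fixed a priori: models whose proposals keep improving the incumbent are selected more often, while the exploration bonus keeps occasionally revisiting the others.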

Keywords

Model-based optimization · Sequential designs · Black-box optimization · Surrogate models · Kriging · Efficient global optimization · Reinforcement learning

Notes

Acknowledgements.

This paper is based on investigations of the project D5 of the Collaborative Research Center SFB/TR TRR 30 and of the project C2 of the Collaborative Research Center SFB 823, which are kindly supported by the Deutsche Forschungsgemeinschaft (DFG).


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Institute of Machining Technology (ISF), TU Dortmund University, Dortmund, Germany
  2. Faculty of Statistics, TU Dortmund University, Dortmund, Germany