RAMBO: Resource-Aware Model-Based Optimization with Scheduling for Heterogeneous Runtimes and a Comparison with Asynchronous Model-Based Optimization

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10556)


Sequential model-based optimization is a popular technique for the global optimization of expensive black-box functions. It uses a regression model to approximate the objective function and iteratively proposes new promising points. Although the original formulation is sequential, parallelization is often indispensable to speed up the computation. This is usually achieved by evaluating as many points per iteration as there are workers available. However, if the runtimes of the objective function are heterogeneous, resources may be wasted by idle workers. Our new knapsack-based scheduling approach aims to increase the effectiveness of parallel optimization through efficient resource utilization. Using runtime predictions derived from an additional regression model, we map point evaluations to workers so as to reduce idling. We compare our approach to five established parallelization strategies on a set of continuous functions with heterogeneous runtimes. Our benchmark compares synchronous and asynchronous model-based approaches and investigates scalability.
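The scheduling idea described above can be sketched as a bin-packing-style heuristic: each proposed point has a predicted runtime, and points are greedily packed into per-worker time budgets so that no worker idles while others are overloaded. The following minimal sketch is an illustration of this general idea, not the authors' actual algorithm; the job names, the fixed time budget, and the first-fit-decreasing heuristic are assumptions made for the example.

```python
def schedule(predicted_runtimes, n_workers, budget):
    """Greedily pack jobs with predicted runtimes into per-worker
    time budgets (first-fit-decreasing heuristic): longest jobs
    first, each assigned to the currently least-loaded worker.
    Returns (assignment, skipped), where `assignment` maps each
    worker index to its list of jobs and `skipped` holds jobs
    that did not fit into any worker's remaining budget."""
    loads = [0.0] * n_workers                      # accumulated runtime per worker
    assignment = {w: [] for w in range(n_workers)}
    skipped = []
    # Sort jobs by descending predicted runtime (first-fit-decreasing).
    for job, runtime in sorted(predicted_runtimes.items(),
                               key=lambda kv: -kv[1]):
        w = min(range(n_workers), key=lambda i: loads[i])
        if loads[w] + runtime <= budget:
            loads[w] += runtime
            assignment[w].append(job)
        else:
            skipped.append(job)                    # defer to a later iteration
    return assignment, skipped

# Hypothetical example: four candidate points, two workers, budget 8.
assignment, skipped = schedule({"a": 5.0, "b": 3.0, "c": 3.0, "d": 2.0},
                               n_workers=2, budget=8.0)
```

In a full system, the predicted runtimes would come from the extra regression model mentioned in the abstract, and skipped jobs would be reconsidered once workers free up.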


Black-box optimization · Model-based optimization · Global optimization · Resource-aware scheduling · Performance management · Parallelization



J. Richter and H. Kotthaus — These authors contributed equally. This work was partly supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 876, A3 and by Competence Network for Technical, Scientific High Performance Computing in Bavaria (KONWIHR) in the project “Implementierung und Evaluation eines Verfahrens zur automatischen, massiv-parallelen Modellselektion im Maschinellen Lernen”.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science 12, TU Dortmund University, Dortmund, Germany
  2. Department of Statistics, TU Dortmund University, Dortmund, Germany
  3. Department of Statistics, LMU München, Munich, Germany
