Algorithm Portfolios for Noisy Optimization: Compare Solvers Early

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8426)


Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of algorithms is a set of algorithms equipped with an algorithm selection tool that distributes the computational power among them. We study portfolios of noisy optimization solvers, show that different settings lead to different performance, and obtain mathematically proven performance (in the sense that the portfolio performs nearly as well as the best of its algorithms) via an ad hoc selection algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag; i.e., to recommend the current recommendation of the best solver, where the best solver is selected by comparing the recommendations the solvers made earlier in the run.
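The lagged comparison admits a compact sketch. Below is a minimal Python illustration, not the authors' implementation: the solver interface (ask/tell/recommend), the round-robin budget split, and the `lag` and `reevals` parameters are all assumptions made for the example. Each solver receives evaluations in turn; at the end, solvers are ranked by re-evaluating the recommendation they made `lag` steps earlier, and the portfolio returns the current recommendation of the winner.

```python
def portfolio_run(solvers, objective, budget, lag=50, reevals=10):
    """Sketch of a portfolio of noisy optimization solvers with lagged comparison.

    Assumed (hypothetical) solver interface:
      .ask()        -> candidate point to evaluate,
      .tell(x, y)   -> incorporate the noisy evaluation y = objective(x),
      .recommend()  -> the solver's current recommendation.
    `objective` is a noisy function to minimize; `budget` is the total
    number of evaluations shared by the portfolio.
    """
    assert budget >= len(solvers)    # every solver gets at least one step
    history = [[] for _ in solvers]  # past recommendations, one list per solver

    for t in range(budget):
        i = t % len(solvers)         # uniform round-robin split of the budget
        s = solvers[i]
        x = s.ask()
        s.tell(x, objective(x))      # one noisy evaluation
        history[i].append(s.recommend())

    def lagged_score(i):
        # Rank solver i by the recommendation it made `lag` steps ago,
        # averaging fresh noisy re-evaluations to damp the noise.
        past = history[i][max(0, len(history[i]) - 1 - lag)]
        return sum(objective(past) for _ in range(reevals)) / reevals

    best = min(range(len(solvers)), key=lagged_score)
    return solvers[best].recommend()  # current recommendation of the lag-selected winner
```

Intuitively, a recommendation made `lag` steps ago has been stable long enough that re-evaluating it gives a less noise-corrupted ranking than comparing the very latest recommendations; since the winner's latest recommendation is still the one returned, no optimization progress is given up.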


Keywords: Noisy Optimization · Algorithm Portfolio Tool · Selection Algorithms · Simple Regret · Portfolio Budget



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

TAO, INRIA-CNRS-LRI, University of Paris-Sud, Gif-sur-Yvette, France
