
Algorithm portfolios for noisy optimization

Published in: Annals of Mathematics and Artificial Intelligence

Abstract

Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of solvers is a set of solvers equipped with an algorithm-selection tool for distributing the computational power among them. Portfolios are widely and successfully used in combinatorial optimization. In this work, we study portfolios of noisy optimization solvers. We obtain mathematically proven performance guarantees (in the sense that the portfolio performs nearly as well as the best of its solvers) with an ad hoc portfolio algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag, i.e., to recommend the current output of the best solver based on its performance earlier in the run. An additional finding is a principled method for distributing the computational power among the solvers in the portfolio.
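The portfolio idea described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' exact algorithm: the function and class names (`noisy_sphere`, `RandomStepSolver`, `portfolio`) are hypothetical, the solvers here are toy (1+1)-style hill climbers, and the budget split is a plain round-robin. It only illustrates the two ingredients named in the abstract: equal distribution of evaluations among solvers, and selection of the final recommendation based on each solver's *lagged* recommendation, re-evaluated several times to average out the noise.

```python
import random


def noisy_sphere(x, sigma=0.3):
    """Noisy objective: sphere function plus Gaussian noise."""
    return sum(xi * xi for xi in x) + random.gauss(0.0, sigma)


class RandomStepSolver:
    """Toy (1+1)-style solver: keeps the candidate that looks better
    under a single noisy evaluation, and records its recommendations."""

    def __init__(self, dim, step):
        self.x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        self.step = step
        self.history = []  # recommendation at each iteration

    def iterate(self, f):
        cand = [xi + random.gauss(0.0, self.step) for xi in self.x]
        if f(cand) < f(self.x):  # noisy comparison, as in noisy optimization
            self.x = cand
        self.history.append(list(self.x))


def portfolio(solvers, f, budget, lag=10, resamplings=20):
    """Distribute the budget equally among solvers, then recommend the
    lagged output of the solver whose lagged recommendation scores best
    under repeated (averaged) re-evaluation of the noisy objective."""
    steps = budget // len(solvers)
    for _ in range(steps):
        for s in solvers:
            s.iterate(f)
    best, best_val = None, float("inf")
    for s in solvers:
        # Compare solvers with some lag: use an earlier recommendation.
        rec = s.history[max(0, len(s.history) - 1 - lag)]
        val = sum(f(rec) for _ in range(resamplings)) / resamplings
        if val < best_val:
            best, best_val = rec, val
    return best
```

Running, e.g., `portfolio([RandomStepSolver(2, s) for s in (0.05, 0.2, 0.5)], noisy_sphere, budget=300, lag=5)` returns the lagged recommendation of whichever step size happened to perform best, which is the sense in which a portfolio tracks the best of its solvers.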



Author information


Corresponding author

Correspondence to Jialin Liu.


About this article


Cite this article

Cauwet, ML., Liu, J., Rozière, B. et al. Algorithm portfolios for noisy optimization. Ann Math Artif Intell 76, 143–172 (2016). https://doi.org/10.1007/s10472-015-9486-2
