Algorithm portfolios for noisy optimization

  • Marie-Liesse Cauwet
  • Jialin Liu
  • Baptiste Rozière
  • Olivier Teytaud
Article

Abstract

Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of solvers is a set of solvers equipped with an algorithm-selection tool that distributes the computational power among them. Portfolios are widely and successfully used in combinatorial optimization. In this work, we study portfolios of noisy optimization solvers. We obtain mathematically proven performance guarantees (in the sense that the portfolio performs nearly as well as the best of its solvers) with an ad hoc portfolio algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag, i.e., to return the current recommendation of the solver that performed best earlier in the run. An additional finding is a principled method for distributing the computational power among the solvers in the portfolio.
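As a rough illustration of the lag-based selection idea, the toy sketch below runs several solvers on a noisy objective with an evenly split budget, scores each solver at a *lagged* (mid-run) recommendation using resampled evaluations, and returns the current recommendation of the lag-best solver. Everything here (the `RandomSearchSolver`, `make_noisy_sphere`, and the `lag_ratio` parameter) is an illustrative assumption, not the paper's actual portfolio algorithm.

```python
import random

def make_noisy_sphere(dim, noise=1.0):
    """Noisy objective: squared distance to the origin plus Gaussian noise."""
    def f(x):
        return sum(xi * xi for xi in x) + random.gauss(0.0, noise)
    return f

class RandomSearchSolver:
    """Toy noisy-optimization solver: random search with a shrinking step size."""
    def __init__(self, dim, step):
        self.best = [0.5] * dim
        self.step = step
        self.history = []  # recommendations over time, kept for the lagged comparison

    def iterate(self, f):
        cand = [b + random.gauss(0.0, self.step) for b in self.best]
        # Average a few samples of each point to mitigate the noise.
        if sum(f(cand) for _ in range(5)) < sum(f(self.best) for _ in range(5)):
            self.best = cand
        self.step *= 0.999
        self.history.append(list(self.best))

def portfolio_run(solvers, f, budget, lag_ratio=0.5, samples=20):
    """Split the budget evenly among solvers, then select by LAGGED performance:
    compare the solvers' mid-run recommendations, and return the *current*
    recommendation of the solver that was best at the lagged point."""
    per_solver = budget // len(solvers)
    for s in solvers:
        for _ in range(per_solver):
            s.iterate(f)

    def lagged_score(s):
        x = s.history[int(lag_ratio * (len(s.history) - 1))]
        return sum(f(x) for _ in range(samples)) / samples

    best = min(solvers, key=lagged_score)
    return best.history[-1]
```

Comparing at a lagged iteration means the selection is based on recommendations whose quality has already stabilized somewhat, which is the intuition (under these toy assumptions) behind comparing solvers "earlier in the run".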

Keywords

Black-box noisy optimization · Algorithm selection · Simple regret

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Marie-Liesse Cauwet
    • 1
  • Jialin Liu
    • 1
  • Baptiste Rozière
    • 1
  • Olivier Teytaud
    • 1
  1. TAO, INRIA-CNRS-LRI, University Paris-Sud, Gif-sur-Yvette, France
