Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms

Abstract

This paper analyses extensions of No-Free-Lunch (NFL) theorems to countably infinite and uncountably infinite domains and investigates the design of optimal optimization algorithms.

The original NFL theorem, due to Wolpert and Macready, states that for finite search domains, all search heuristics have the same performance when averaged over the uniform distribution over all possible functions. For infinite domains, extending the notion of a distribution over all possible functions raises measurability issues and calls for stochastic process theory. For countably infinite domains, we prove that the natural extension of the NFL theorem does not hold under the standard formalization of probability, but that a weaker form does: there exist non-trivial distributions of fitness functions for which all search heuristics perform equally well. Our main result is that for continuous domains, NFL does not hold. This free-lunch theorem rests on formalizing random fitness functions by means of random fields.
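To make the finite-domain statement concrete, the following sketch checks it by brute force on a toy instance; the three-point search space, the two non-repeating heuristics, and the hitting-time performance measure are illustrative choices of ours, not constructions from the paper.

```python
# Brute-force check of the finite-domain NFL theorem of Wolpert and Macready.
from itertools import product

X = (0, 1, 2)                 # finite search space
Y = (0, 1)                    # finite codomain

def sweep(f):
    """Evaluate the points in the fixed order 0, 1, 2."""
    return [f[x] for x in X]

def adaptive(f):
    """A deliberately different, adaptive rule: query point 2 first, then
    order the remaining points based on the observed value. Like `sweep`,
    it never revisits a point, as NFL requires."""
    trace = [f[2]]
    rest = (0, 1) if trace[0] == 1 else (1, 0)
    trace += [f[x] for x in rest]
    return trace

def hitting_time(trace):
    """Number of evaluations until the function's own maximum is observed."""
    return trace.index(max(trace)) + 1

for algo in (sweep, adaptive):
    # Average the hitting time uniformly over all |Y|^|X| = 8 functions.
    total = sum(hitting_time(algo(dict(zip(X, values))))
                for values in product(Y, repeat=len(X)))
    print(algo.__name__, total / len(Y) ** len(X))
# Both print 1.5: averaged uniformly over all 8 functions, the two
# heuristics are indistinguishable, as the NFL theorem predicts.
```

The point of the `adaptive` rule is that it looks smarter, yet uniform averaging over all functions erases any advantage; any performance measure depending only on the observed values yields the same tie.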

We also consider the design of optimal optimization algorithms for a given random field in a black-box setting, i.e., with complexity measured solely by the number of requests to the fitness function. We derive an optimal algorithm, based on Bellman’s decomposition principle, for a given number of iterates and a given distribution of fitness functions. We also approximate this algorithm with a Monte-Carlo planning algorithm close to UCT (Upper Confidence Trees), and provide experimental results.
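As a concrete illustration of the Bellman decomposition in this black-box setting, here is a minimal sketch assuming a small finite search space and a known finite prior over fitness functions; the four-point space, the uniform prior, and the two-query budget are toy assumptions of ours, not the paper's experimental setup. The state is the history of (point, value) observations, and the optimal policy maximizes, over the next query point, the posterior expectation of the best value obtainable with the remaining budget.

```python
# Exact Bellman backward recursion for Bayes-optimal black-box optimization
# under a known finite prior over fitness functions (a toy instance).
from functools import lru_cache
from itertools import product

POINTS = tuple(range(4))
# Uniform prior over all functions f: POINTS -> {0, 1} (2^4 = 16 functions).
PRIOR = tuple(product((0, 1), repeat=len(POINTS)))

@lru_cache(maxsize=None)
def best_expected(history, budget):
    """Expected best observed value when playing optimally from `history`
    (a tuple of (point, value) observations) with `budget` queries left."""
    # Posterior = prior restricted to functions consistent with the history.
    consistent = [f for f in PRIOR if all(f[x] == v for x, v in history)]
    seen = max((v for _, v in history), default=0)
    if budget == 0:
        return seen
    tried = {x for x, _ in history}
    best = seen
    for x in POINTS:              # Bellman: maximize over the next query ...
        if x in tried:
            continue
        # ... and average over the posterior distribution of f(x).
        ev = sum(best_expected(history + ((x, f[x]),), budget - 1)
                 for f in consistent) / len(consistent)
        best = max(best, ev)
    return best

print(best_expected((), 2))  # -> 0.75 for this toy prior and a 2-query budget
```

For realistic random fields this exact recursion is intractable, which is why the paper resorts to a UCT-like Monte-Carlo planning approximation.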

References

  1. Auger, A., Schoenauer, M., Teytaud, O.: Local and global order 3/2 convergence of a surrogate evolutionary algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 857–864. Assoc. Comput. Mach., New York (2005)

  2. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)

  3. Bertsekas, D.: Dynamic Programming and Optimal Control. Athena Scientific, Belmont (1995)

  4. Billingsley, P.: Probability and Measure. Wiley, New York (1986)

  5. Booker, A., Dennis, Jr., J.E., Frank, P., Serafini, D., Torczon, V., Trosset, M.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Optim. 17(1), 1–13 (1999)

  6. Conn, A., Scheinberg, K., Toint, L.: Recent progress in unconstrained nonlinear optimization without derivatives. Math. Program. 79, 397–414 (1997)

  7. Corne, D.W., Knowles, J.D.: Some multiobjective optimizers are better than others. In: Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC 2003), pp. 2506–2512. IEEE Press, New York (2003)

  8. Culberson, J.C.: On the futility of blind search: an algorithmic view of “No Free Lunch”. Evol. Comput. 6(2), 109–127 (1998)

  9. Daniell, P.J.: Integrals in an infinite number of dimensions. Ann. Math. 20, 281–288 (1919)

  10. Dennis, J., Torczon, V.: Managing approximation models in optimization. In: Alexandrov, N., Hussaini, M.-Y. (eds.) Multidisciplinary Design Optimization: State of the Art, pp. 330–347. SIAM, Philadelphia (1997)

  11. Doob, J.: Stochastic process measurability conditions. Ann. Inst. Fourier Grenoble 25(3–4), 163–176 (1975)

  12. Droste, S., Jansen, T., Wegener, I.: Perhaps not a free lunch but at least a free appetizer. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 99), pp. 833–839, San Francisco, CA. Morgan Kaufmann, San Mateo (1999)

  13. Droste, S., Jansen, T., Wegener, I.: Optimization with randomized search heuristics—The (A)NFL theorem, realistic scenarios, and difficult functions. Theor. Comput. Sci. 287(1), 131–144 (2002)

  14. Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-assisted evolution strategies. In: Parallel Problem Solving from Nature (PPSN 2002). Lecture Notes in Computer Science, vol. 2439, pp. 371–380. Springer, Berlin (2002)

  15. Gelly, S., Hoock, J.B., Rimmel, A., Teytaud, O., Kalemkarian, Y.: The parallelization of Monte-Carlo planning. In: Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO 2008), pp. 198–203 (2008, to appear)

  16. Gelly, S., Ruette, S., Teytaud, O.: Comparison-based algorithms are robust and randomized algorithms are anytime. Evol. Comput. J. 15(4), 411–434 (2007)

  17. Gelly, S., Silver, D.: Combining online and offline knowledge in UCT. In: Proceedings of the 24th International Conference on Machine Learning (ICML 2007), pp. 273–280. Assoc. Comput. Mach., New York (2007)

  18. Gelly, S., Teytaud, O., Gagné, C.: Resource-aware parameterizations of EDA. In: Proceedings of the 2006 IEEE Congress on Evolutionary Computation (CEC 2006), pp. 2506–2512. IEEE Press, New York (2006)

  19. Geman, D., Jedynak, B.: An active testing model for tracking roads in satellite images. IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 1–14 (1996)

  20. Grieken, M.V.: Optimisation pour l’apprentissage et apprentissage pour l’optimisation. Ph.D. Thesis, Université Paul Sabatier, Toulouse (2004)

  21. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9(2), 159–195 (2001)

  22. Hopkins, D., Lavelle, T., Patnaik, S.: Neural network and regression methods demonstrated in the design optimization of a subsonic aircraft. Technical Report, NASA Glen Research Center, Research & Technology (2002)

  23. Hüsken, M., Jin, Y., Sendhoff, B.: Structure optimization of neural networks for evolutionary design optimization. Soft Comput. 9(1), 21–28 (2005)

  24. Ibric, S., Jovanovic, M., Djuric, Z., Parojcic, J., Petrovic, S., Solomun, L., Stupar, B.: Artificial neural networks in the modeling and optimization of aspirin extended release tablets with Eudragit L100 as matrix substance. AAPS PharmSciTech 4(1), 62–70 (2003)

  25. Igel, C., Toussaint, M.: A No-Free-Lunch theorem for non-uniform distributions of target functions. J. Math. Model. Algorithms 3(4), 313–322 (2004)

  26. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13(4), 455–492 (1998)

  27. Keane, A., Nair, P.: Computational Approaches for Aerospace Design: The Pursuit of Excellence. Wiley, New York (2005)

  28. Kern, S., Hansen, N., Koumoutsakos, P.: Local meta-models for optimization using evolution strategies. In: Proceedings of the Parallel Problems Solving from Nature Conference (PPSN 2006), pp. 939–948 (2006)

  29. Kleijnen, J.: Sensitivity analysis of simulation experiments: regression analysis and statistical design. Math. Comput. Simul. 34(3–4), 297–315 (1992)

  30. Kocsis, L., Szepesvari, C.: Bandit-based Monte-Carlo planning. In: European Conference on Machine Learning (ECML 2006). Lecture Notes in Computer Science, vol. 4212, pp. 282–293. Springer, Berlin (2006)

  31. Kolmogorov, A.: Foundations of the Theory of Probability. Chelsea, New York (1956). Translation of the 1933 German original, Grundbegriffe der Wahrscheinlichkeitsrechnung

  32. Leary, S.J., Bhaskar, A., Keane, A.J.: A derivative based surrogate model for approximating and optimizing the output of an expensive computer simulation. J. Glob. Optim. 30(1), 39–58 (2004)

  33. Poupart, P., Vlassis, N., Hoey, J., Regan, K.: An analytic solution to discrete Bayesian reinforcement learning. In: Proceedings of the International Conference on Machine Learning (ICML 2006), pp. 697–704. Assoc. Comput. Mach., New York (2006)

  34. Powell, M.J.D.: Unconstrained minimization algorithms without computation of derivatives. Boll. Unione Mat. Ital. 9, 60–69 (1974)

  35. Ames Research Center: Aerodynamic design using neural networks: the amount of computation needed to optimize a design is reduced. Technical Report, NASA Tech Briefs Online (2003)

  36. Radcliffe, N., Surry, P.: Fundamental limitations on search algorithms: Evolutionary computing in perspective. In: van Leeuwen, J. (ed.) Computer Science Today. Lecture Notes in Computer Science, pp. 275–291. Springer, Berlin (1995)

  37. Runarsson, T.-P.: Ordinal regression in evolutionary computation. In: Proceedings of the Parallel Problems Solving from Nature Conference (PPSN 2006), pp. 1048–1057 (2006)

  38. Schumacher, C., Vose, M.D., Whitley, L.D.: The No Free Lunch and Problem Description Length. In: Genetic and Evolutionary Computation Conference (GECCO 2001), pp. 565–570 (2001)

  39. Vanmarcke, E.: Random Fields: Analysis and Synthesis. MIT Press, Cambridge (1998)

  40. Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Glob. Optim. (to appear). Online First

  41. Wang, Y., Gelly, S.: Modifications of UCT and sequence-like simulations for Monte-Carlo Go. In: IEEE Symposium on Computational Intelligence and Games, Honolulu, Hawaii, pp. 175–182 (2007)

  42. Wolpert, D., Macready, W.: No Free Lunch Theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)

Author information

Correspondence to Anne Auger.

Cite this article

Auger, A., Teytaud, O. Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms. Algorithmica 57, 121–146 (2010). https://doi.org/10.1007/s00453-008-9244-5
