Algorithmica, Volume 57, Issue 1, pp. 121–146

Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms

Abstract

This paper analyses extensions of No-Free-Lunch (NFL) theorems to countably infinite and uncountably infinite domains, and investigates the design of optimal optimization algorithms.

The original NFL theorem, due to Wolpert and Macready, states that, for finite search domains, all search heuristics have the same performance when averaged over the uniform distribution over all possible fitness functions. For infinite domains, extending the notion of a distribution over all possible functions raises measurability issues and requires tools from stochastic process theory. For countably infinite domains, we prove that the natural extension of the NFL theorem does not hold under the standard formalization of probability, but that a weaker form does: there exist non-trivial distributions of fitness functions under which all search heuristics perform equally. Our main result is that for continuous domains, NFL does not hold. This free-lunch theorem rests on formalizing random fitness functions by means of random fields.
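
As a concrete illustration of the finite-domain NFL statement, the following sketch (ours, not code from the paper) enumerates all |Y|^|X| fitness functions on a tiny domain and checks that two deterministic, non-repeating search heuristics have identical best-so-far profiles when averaged uniformly over all functions. The domain sizes and heuristic choices are arbitrary illustrations.

    # Empirical check of the finite-domain NFL theorem (illustrative sketch).
    # Average, over ALL functions f: X -> Y, the best fitness found after
    # each evaluation, for two different non-repeating search orders.
    from itertools import product

    X = range(4)   # search domain
    Y = range(3)   # possible fitness values
    BUDGET = 3     # number of fitness evaluations

    def forward_search(f, budget):
        """Query points in the order 0, 1, 2, ..."""
        order = list(X)[:budget]
        return [max(f[x] for x in order[:t + 1]) for t in range(budget)]

    def backward_search(f, budget):
        """Query points in the opposite order."""
        order = list(reversed(X))[:budget]
        return [max(f[x] for x in order[:t + 1]) for t in range(budget)]

    def average_profile(heuristic, budget):
        functions = list(product(Y, repeat=len(X)))  # all |Y|^|X| functions
        totals = [0.0] * budget
        for f in functions:
            for t, best in enumerate(heuristic(f, budget)):
                totals[t] += best
        return [s / len(functions) for s in totals]

    # Both profiles coincide, as NFL predicts for the uniform distribution.
    print(average_profile(forward_search, BUDGET))
    print(average_profile(backward_search, BUDGET))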

We also consider the design of optimal optimization algorithms for a given random field in a black-box setting, where complexity is measured solely by the number of requests to the fitness function. We derive an optimal algorithm, based on Bellman's decomposition principle, for a given number of iterates and a given distribution of fitness functions. We also approximate this algorithm with a Monte-Carlo planning method close to the UCT (Upper Confidence Trees) algorithm, and provide experimental results.
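
The Bellman decomposition can be made concrete on a toy problem. The sketch below is our illustration, not the paper's implementation: it assumes a finite domain, a known uniform prior over a handful of candidate fitness functions, and performance measured by the best value found within the budget. The state is the history of observations, and the optimal expected outcome is computed by backward induction on the remaining budget; all identifiers (optimal_value, PRIOR, ...) are hypothetical.

    # Toy instance of the Bellman decomposition for optimal black-box
    # optimization (illustrative sketch). State = observation history;
    # value = expected best fitness found under optimal play.
    from functools import lru_cache

    X = (0, 1, 2)
    PRIOR = (              # uniform prior over candidate fitness functions
        (0, 2, 1),
        (2, 0, 1),
        (1, 1, 2),
        (2, 2, 0),
    )

    def consistent(f, history):
        return all(f[x] == y for x, y in history)

    @lru_cache(maxsize=None)
    def optimal_value(history, budget):
        """Expected best-found value under optimal play, given observations."""
        best_so_far = max((y for _, y in history), default=float("-inf"))
        if budget == 0:
            return best_so_far
        posterior = [f for f in PRIOR if consistent(f, history)]
        best = float("-inf")
        for x in X:
            if any(x == px for px, _ in history):
                continue  # re-querying a deterministic function reveals nothing
            # Expectation, over the posterior, of the optimal continuation.
            value = sum(optimal_value(history + ((x, f[x]),), budget - 1)
                        for f in posterior) / len(posterior)
            best = max(best, value)
        return best if best != float("-inf") else best_so_far

    print(optimal_value((), 2))  # optimal expected best value with 2 queries

On continuous domains this exact recursion is intractable, which is what motivates approximating it by a Monte-Carlo planning method in the UCT style, as the paper does.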

Keywords

No-Free-Lunch · Kolmogorov’s extension theorem · Expensive optimization · Dynamic programming · Complexity · Bandit-based Monte-Carlo planning


References

  1. Auger, A., Schoenauer, M., Teytaud, O.: Local and global order 3/2 convergence of a surrogate evolutionary algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 857–864. Assoc. Comput. Mach., New York (2005)
  2. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
  3. Bertsekas, D.: Dynamic Programming and Optimal Control. Athena Scientific, Belmont (1995)
  4. Billingsley, P.: Probability and Measure. Wiley, New York (1986)
  5. Booker, A., Dennis, Jr., J.E., Frank, P., Serafini, D., Torczon, V., Trosset, M.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Optim. 17(1), 1–13 (1999)
  6. Conn, A., Scheinberg, K., Toint, L.: Recent progress in unconstrained nonlinear optimization without derivatives. Math. Program. 79, 397–414 (1997)
  7. Corne, D.W., Knowles, J.D.: Some multiobjective optimizers are better than others. In: Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC 2003), pp. 2506–2512. IEEE Press, New York (2003)
  8. Culberson, J.C.: On the futility of blind search: an algorithmic view of “No Free Lunch”. Evol. Comput. 6(2), 109–127 (1998)
  9. Daniell, P.J.: Integrals in an infinite number of dimensions. Ann. Math. 20, 281–288 (1919)
  10. Dennis, J., Torczon, V.: Managing approximation models in optimization. In: Alexandrov, N., Hussaini, M.-Y. (eds.) Multidisciplinary Design Optimization: State of the Art, pp. 330–347. SIAM, Philadelphia (1997)
  11. Doob, J.: Stochastic process measurability conditions. Ann. Inst. Fourier Grenoble 25(3–4), 163–176 (1975)
  12. Droste, S., Jansen, T., Wegener, I.: Perhaps not a free lunch but at least a free appetizer. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 1999), pp. 833–839, San Francisco, CA. Morgan Kaufmann, San Mateo (1999)
  13. Droste, S., Jansen, T., Wegener, I.: Optimization with randomized search heuristics—The (A)NFL theorem, realistic scenarios, and difficult functions. Theor. Comput. Sci. 287(1), 131–144 (2002)
  14. Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-assisted evolution strategies. In: Parallel Problem Solving from Nature (PPSN 2002). Lecture Notes in Computer Science, vol. 2439, pp. 371–380. Springer, Berlin (2002)
  15. Gelly, S., Hoock, J.B., Rimmel, A., Teytaud, O., Kalemkarian, Y.: The parallelization of Monte-Carlo planning. In: Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO 2008), pp. 198–203 (2008, to appear)
  16. Gelly, S., Ruette, S., Teytaud, O.: Comparison-based algorithms are robust and randomized algorithms are anytime. Evol. Comput. J. 15(4), 411–434 (2007)
  17. Gelly, S., Silver, D.: Combining online and offline knowledge in UCT. In: Proceedings of the 24th International Conference on Machine Learning (ICML 2007), pp. 273–280. Assoc. Comput. Mach., New York (2007)
  18. Gelly, S., Teytaud, O., Gagné, C.: Resource-aware parameterizations of EDA. In: Proceedings of the 2006 IEEE Congress on Evolutionary Computation (CEC 2006), pp. 2506–2512. IEEE Press, New York (2006)
  19. Geman, D., Jedynak, B.: An active testing model for tracking roads in satellite images. IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 1–14 (1996)
  20. Grieken, M.V.: Optimisation pour l’apprentissage et apprentissage pour l’optimisation. Ph.D. Thesis, Université Paul Sabatier, Toulouse (2004)
  21. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9(2), 159–195 (2001)
  22. Hopkins, D., Lavelle, T., Patnaik, S.: Neural network and regression methods demonstrated in the design optimization of a subsonic aircraft. Technical Report, NASA Glenn Research Center, Research & Technology (2002)
  23. Hüsken, M., Jin, Y., Sendhoff, B.: Structure optimization of neural networks for evolutionary design optimization. Soft Comput. 9(1), 21–28 (2005)
  24. Ibric, S., Jovanovic, M., Djuric, Z., Parojcic, J., Petrovic, S., Solomun, L., Stupar, B.: Artificial neural networks in the modeling and optimization of aspirin extended release tablets with Eudragit L100 as matrix substance. AAPS PharmSciTech 4(1), 62–70 (2003)
  25. Igel, C., Toussaint, M.: A No-Free-Lunch theorem for non-uniform distributions of target functions. J. Math. Model. Algorithms 3(4), 313–322 (2004)
  26. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13(4), 455–492 (1998)
  27. Keane, A., Nair, P.: Computational Approaches for Aerospace Design: The Pursuit of Excellence. Wiley, New York (2005)
  28. Kern, S., Hansen, N., Koumoutsakos, P.: Local meta-models for optimization using evolution strategies. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN 2006), pp. 939–948 (2006)
  29. Kleijnen, J.: Sensitivity analysis of simulation experiments: regression analysis and statistical design. Math. Comput. Simul. 34(3–4), 297–315 (1992)
  30. Kocsis, L., Szepesvari, C.: Bandit-based Monte-Carlo planning. In: European Conference on Machine Learning (ECML 2006). Lecture Notes in Computer Science, vol. 4212, pp. 282–293. Springer, Berlin (2006)
  31. Kolmogorov, A.: Foundations of the Theory of Probability (original: Grundbegriffe der Wahrscheinlichkeitsrechnung, 1933). Chelsea, New York (1956)
  32. Leary, S.J., Bhaskar, A., Keane, A.J.: A derivative based surrogate model for approximating and optimizing the output of an expensive computer simulation. J. Glob. Optim. 30(1), 39–58 (2004)
  33. Poupart, P., Vlassis, N., Hoey, J., Regan, K.: An analytic solution to discrete Bayesian reinforcement learning. In: Proceedings of the International Conference on Machine Learning (ICML 2006), pp. 697–704. Assoc. Comput. Mach., New York (2006)
  34. Powell, M.J.D.: Unconstrained minimization algorithms without computation of derivatives. Boll. Unione Mat. Ital. 9, 60–69 (1974)
  35. Ames Research Center: Aerodynamic design using neural networks; the amount of computation needed to optimize a design is reduced. Technical Report, NASA Tech Briefs Online (2003)
  36. Radcliffe, N., Surry, P.: Fundamental limitations on search algorithms: Evolutionary computing in perspective. In: van Leeuwen, J. (ed.) Computer Science Today. Lecture Notes in Computer Science, pp. 275–291. Springer, Berlin (1995)
  37. Runarsson, T.-P.: Ordinal regression in evolutionary computation. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN 2006), pp. 1048–1057 (2006)
  38. Schumacher, C., Vose, M.D., Whitley, L.D.: The No Free Lunch and problem description length. In: Genetic and Evolutionary Computation Conference (GECCO 2001), pp. 565–570 (2001)
  39. VanMarcke, E.: Random Fields: Analysis and Synthesis. MIT Press, Cambridge (1998)
  40. Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Glob. Optim. (to appear, Online First)
  41. Wang, Y., Gelly, S.: Modifications of UCT and sequence-like simulations for Monte-Carlo Go. In: IEEE Symposium on Computational Intelligence and Games, Honolulu, Hawaii, pp. 175–182 (2007)
  42. Wolpert, D., Macready, W.: No Free Lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  1. TAO Team, INRIA Saclay–LRI, Orsay Cedex, France
