Information Perspective of Optimization

  • Yossi Borenstein
  • Riccardo Poli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4193)


In this paper we relate information theory and Kolmogorov complexity (KC) to optimization in the black-box scenario. We define the set of all possible decisions an algorithm might make during a run, associate with a function a probability distribution over this set, and define the entropy of the function accordingly. We show that the expected KC of this set (rather than of the function itself) is a better measure of problem difficulty. We then analyze the effect of the entropy on the expected KC. Finally, we show, for a restricted scenario, that any permutation closure of a single function (the finest level of granularity for which a No Free Lunch theorem can hold [7]) can be associated with a particular value of entropy. This implies bounds on the expected performance of an algorithm on the members of that closure.
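The decision-based entropy sketched in the abstract can be illustrated with a toy example. This is not the paper's formal construction but a hedged sketch under simple assumptions: we record the decisions made by a basic (1+1) hill climber on the OneMax function (all names here, including `run_hill_climber` and the choice of objective, are illustrative) and compute the Shannon entropy of the empirical distribution of those decisions.

```python
import math
import random
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in bits) of the empirical distribution given by counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def run_hill_climber(f, n_bits, steps, rng):
    """Record the decisions (accepted flip position, or -1 for a rejection)
    made by a simple (1+1) hill climber on bitstrings of length n_bits."""
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    decisions = []
    for _ in range(steps):
        i = rng.randrange(n_bits)
        y = list(x)
        y[i] ^= 1                    # flip one randomly chosen bit
        if f(y) >= f(x):
            x = y
            decisions.append(i)      # decision: accept the flip at position i
        else:
            decisions.append(-1)     # decision: reject the move
    return decisions

# Toy objective: OneMax (number of ones in the bitstring), illustration only.
onemax = sum

rng = random.Random(0)
decisions = run_hill_climber(onemax, n_bits=8, steps=200, rng=rng)
H = shannon_entropy(Counter(decisions).values())
print(f"entropy of the decision distribution: {H:.3f} bits")
```

A highly structured function concentrates the decision distribution (low entropy), whereas a function that looks random to the algorithm spreads decisions nearly uniformly, which is the intuition behind relating entropy to expected Kolmogorov complexity.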






References

  1. Borenstein, Y., Poli, R.: Kolmogorov complexity, optimization and hardness. In: Proceedings of IEEE CEC 2006 (2006)
  2. Borenstein, Y., Poli, R.: No free lunch, Kolmogorov complexity and the information landscape. In: Proceedings of IEEE CEC 2005, vol. 3 (2005)
  3. Droste, S., Jansen, T., Wegener, I.: Upper and lower bounds for randomized search heuristics in black-box optimization. In: ECCC (048) (2003)
  4. English, T.M.: On the structure of sequential search: Beyond "no free lunch". In: Gottlieb, J., Raidl, G.R. (eds.) EvoCOP 2004. LNCS, vol. 3004, pp. 95–103. Springer, Heidelberg (2004)
  5. Grünwald, P., Vitányi, P.: Shannon information and Kolmogorov complexity. IEEE Transactions on Information Theory (2004) (in review)
  6. Grünwald, P., Vitányi, P.: Algorithmic information theory. In: Handbook on the Philosophy of Information. Elsevier, Amsterdam (to appear)
  7. Schumacher, C., Vose, M.D., Whitley, L.D.: The no free lunch and problem description length. In: Spector, L., et al. (eds.) GECCO 2001, pp. 565–570. Morgan Kaufmann, San Francisco (2001)
  8. Vose, M.D.: The Simple Genetic Algorithm: Foundations and Theory. MIT Press, Cambridge (1998)
  9. Wegener, I.: Towards a theory of randomized search heuristics. In: Rovan, B., Vojtáš, P. (eds.) MFCS 2003. LNCS, vol. 2747, pp. 125–141. Springer, Heidelberg (2003)
  10. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1(1), 67–82 (1997)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yossi Borenstein (1)
  • Riccardo Poli (1)
  1. University of Essex, Colchester, UK
