How Many Strings Are Easy to Predict?

  • Yuri Kalnishkan
  • Volodya Vovk
  • Michael V. Vyugin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2777)


It is well known in the theory of Kolmogorov complexity that most strings cannot be compressed; more precisely, only exponentially few (Θ(2^(n−m))) strings of length n can be compressed by m bits. This paper extends the ‘incompressibility’ property of Kolmogorov complexity to the ‘unpredictability’ property of predictive complexity. The ‘unpredictability’ property states that the predictive complexity (defined as the loss suffered by a universal prediction algorithm working infinitely long) of most strings is close to a trivial upper bound (the loss suffered by a trivial minimax constant prediction strategy). We show that only exponentially few strings can be successfully predicted and find the base of the exponent.
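The incompressibility bound quoted above rests on an elementary counting argument, which a short sketch can make concrete. The following Python fragment is illustrative only (the function names and the choice of the square loss with constant prediction 1/2 are assumptions for the example, not taken from the paper): it counts the possible short descriptions, and computes the loss of a trivial constant prediction strategy on a bit string.

```python
def compressible_bound(n: int, m: int) -> tuple[int, int]:
    """Counting argument: a description that saves at least m bits on a
    length-n string is itself a binary string of length < n - m.  There are
    sum(2**i for i in range(n - m)) = 2**(n - m) - 1 such strings, so at
    most that many of the 2**n strings of length n can be compressed by
    m bits -- a fraction below 2**-m, independently of n."""
    short_descriptions = 2 ** (n - m) - 1   # binary strings shorter than n - m
    all_strings = 2 ** n                    # all binary strings of length n
    return short_descriptions, all_strings

def constant_square_loss(bits) -> float:
    """Square loss of a trivial constant strategy that always predicts 1/2:
    each bit costs (b - 1/2)**2 = 1/4, so any length-n string costs n/4."""
    return sum((b - 0.5) ** 2 for b in bits)

short, total = compressible_bound(n=20, m=8)
assert short / total < 2 ** -8              # fewer than 1 in 256 strings

assert constant_square_loss([0, 1, 1, 0, 1]) == 5 / 4   # n/4 for n = 5
```

The n/4 figure is exactly the kind of trivial upper bound the abstract refers to: the paper's result says that for most strings no prediction strategy does substantially better than this constant baseline.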




Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Yuri Kalnishkan (1)
  • Volodya Vovk (1)
  • Michael V. Vyugin (1)

  1. Department of Computer Science, Royal Holloway, University of London, Egham, Surrey, United Kingdom