Universal Prediction of Selected Bits

  • Tor Lattimore
  • Marcus Hutter
  • Vaibhav Gavane
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6925)

Abstract

Many learning tasks can be viewed as sequence prediction problems. For example, online classification can be converted to sequence prediction with the sequence being pairs of input/target data and where the goal is to correctly predict the target data given input data and previous input/target pairs. Solomonoff induction is known to solve the general sequence prediction problem, but only if the entire sequence is sampled from a computable distribution. In the case of classification and discriminative learning though, only the targets need be structured (given the inputs). We show that the normalised version of Solomonoff induction can still be used in this case, and more generally that it can detect any recursive sub-pattern (regularity) within an otherwise completely unstructured sequence. It is also shown that the unnormalised version can fail to predict very simple recursive sub-patterns.
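The abstract's central claim can be illustrated with a toy Bayesian mixture. Solomonoff induction mixes over all lower semicomputable semimeasures; the sketch below is only a two-hypothesis stand-in (both predictors, `h_uniform` and `h_even_ones`, are invented for illustration, not the paper's construction). It shows the mechanism at work: after each bit the mixture is renormalised, and the posterior locks onto a recursive sub-pattern (every even-indexed bit is 1) even though the odd-indexed bits are pure noise.

```python
import random

# Two hypothetical predictors (stand-ins for a class of programs):
# each maps a position to P(next bit = 1) given the history so far.
def h_uniform(i, history):
    """Models a completely unstructured (fair-coin) sequence."""
    return 0.5

def h_even_ones(i, history):
    """Models the recursive sub-pattern: every even-indexed bit is 1."""
    return 1.0 if i % 2 == 0 else 0.5

def mixture_predict(bits, hypotheses, prior):
    """Normalised Bayes mixture: predict, then reweight by likelihood."""
    w = list(prior)
    predictions = []
    for i, b in enumerate(bits):
        # Mixture prediction for P(bit i = 1) before seeing bit i.
        p1 = sum(wj * h(i, bits[:i]) for wj, h in zip(w, hypotheses))
        predictions.append(p1)
        # Bayesian update: multiply each weight by the likelihood of b.
        w = [wj * (h(i, bits[:i]) if b == 1 else 1 - h(i, bits[:i]))
             for wj, h in zip(w, hypotheses)]
        total = sum(w)
        w = [wj / total for wj in w]  # the normalisation step
    return predictions, w

# Even bits follow the pattern (always 1); odd bits are pure noise.
random.seed(0)
seq = [1 if i % 2 == 0 else random.randint(0, 1) for i in range(40)]
preds, weights = mixture_predict(seq, [h_uniform, h_even_ones], [0.5, 0.5])
```

After 40 bits the posterior mass on the pattern hypothesis exceeds 1 - 2^-20, so the predicted probability of a 1 at even positions approaches 1, while predictions at odd positions stay at exactly 0.5: the structured sub-pattern is learned and the unstructured bits remain, correctly, unpredictable.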

Keywords

Sequence prediction · Solomonoff induction · online classification · discriminative learning · algorithmic information theory



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Tor Lattimore, Australian National University, Australia
  • Marcus Hutter, Australian National University, Australia, and ETH Zürich
  • Vaibhav Gavane, VIT University, Vellore, India
