Sequence Prediction Based on Monotone Complexity
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km = −log m, i.e. based on universal deterministic/one-part MDL. The measure m is extremely close to Solomonoff’s prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the quantities relevant for prediction. We show that for deterministic computable environments, the “posterior” and losses of m converge, but rapid convergence can be shown only on-sequence; the off-sequence behavior remains unclear. In probabilistic environments, neither the posterior nor the losses converge, in general.
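For reference, the quantities named in the abstract can be written out using the standard definitions via a universal monotone Turing machine U (the notation below follows common convention and is assumed, not taken from this page; ℓ(p) denotes the length of program p, and U(p) = x∗ means U outputs some extension of x on input p):

```latex
% Solomonoff's prior M, the monotone complexity Km, and the
% associated measure m, via a universal monotone machine U:
\[
  M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-\ell(p)}, \qquad
  Km(x) \;=\; \min\{\ell(p) : U(p)=x*\}, \qquad
  m(x) \;=\; 2^{-Km(x)} .
\]
% Note m(x) \le M(x), since the shortest program for x contributes
% one term 2^{-Km(x)} to the sum defining M(x); the abstract's
% "extremely close" refers to the converse direction.
% Prediction is based on the posterior (conditional) of m:
\[
  m(x_t \mid x_{<t}) \;=\; \frac{m(x_{1:t})}{m(x_{<t})} ,
\]
% and the question studied is whether this converges to the true
% (deterministic or probabilistic) environment's conditionals.
```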