Abstract
According to some advocates, algorithmic information theory (a branch of theoretical computer science) promises to underwrite an ultimate formal theory of comprehensible patterns. The arguments have an intuitive appeal when expressed in terms of well-known computer languages, and can both inspire and explain practical results in machine learning. The theory of Solomonoff induction, which combines algorithmic information theory and Bayesian inference, has been suggested as a solution to the philosophical problem of induction and an idealisation of the scientific method; an extension of it forms part of a proposed mathematical theory of intelligence.
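For reference, the theory mentioned above rests on Solomonoff's universal prior, standardly defined (e.g. in Li and Vitányi's textbook) by weighting every prefix-free program p that makes a chosen universal machine U output a continuation of the observed string x; prediction then proceeds by Bayesian conditioning. The subscript U is written explicitly here because the dependence on the reference machine is exactly what the rest of the abstract takes issue with:

```latex
M_U(x) \;=\; \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-|p|},
\qquad
M_U(a \mid x) \;=\; \frac{M_U(xa)}{M_U(x)}
```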
Unfortunately, the philosophical import of algorithmic information theory is undermined by its dependence on an arbitrary choice of language (reference machine). While the choice of reference machine is irrelevant in the infinite limit, I observe that, over any finite set of objects, there are infinitely many reference machines that assign arbitrary simplicity rankings. I also explain why, regardless of how much data has been observed, infinitely many reference machines will always give every conceivable “best guess” answer to finite questions in Solomonoff induction.
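The reference-machine dependence can be made concrete with a deliberately toy model (not real universal machines): each "machine" is a finite program-to-output table, and the complexity of a string is the length of its shortest program under that table. The two tables below are hypothetical and chosen only to show that the simplicity ordering of two strings can be reversed by swapping machines, which is the finite-case arbitrariness the abstract refers to:

```python
def complexity(machine, x):
    """Length of the shortest program producing x under this machine,
    or None if no program in the table outputs x."""
    lengths = [len(p) for p, out in machine.items() if out == x]
    return min(lengths) if lengths else None

# Machine U gives "0000" a 1-bit shortcut; "0101" needs a longer literal code.
U = {"0": "0000", "110101": "0101"}
# Machine V reverses the bias: the 1-bit shortcut now produces "0101".
V = {"0": "0101", "110000": "0000"}

# The simplicity ordering of the two strings flips between machines.
assert complexity(U, "0000") < complexity(U, "0101")
assert complexity(V, "0101") < complexity(V, "0000")
```

The invariance theorem guarantees only that two universal machines' complexity measures differ by an additive constant; as the toy tables suggest, that constant can dominate any fixed finite comparison.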
Finally, I argue that algorithmic information theory is philosophically incomplete because it pretends to a “God’s-eye view” and ignores relevant information in the structure of the observer. This issue has been raised before, but has received relatively little attention. The question of anthropic bias - how to take the existence of the reasoner into account when reasoning - is still a subject of major disagreement in Bayesian inference, and is likely to remain so in algorithmic information theory as well.
© 2014 Springer International Publishing Switzerland
McGregor, S. (2014). Natural Descriptions and Anthropic Bias: Extant Problems In Solomonoff Induction. In: Beckmann, A., Csuhaj-Varjú, E., Meer, K. (eds) Language, Life, Limits. CiE 2014. Lecture Notes in Computer Science, vol 8493. Springer, Cham. https://doi.org/10.1007/978-3-319-08019-2_30
Print ISBN: 978-3-319-08018-5
Online ISBN: 978-3-319-08019-2