Natural Descriptions and Anthropic Bias: Extant Problems In Solomonoff Induction

  • Conference paper
Language, Life, Limits (CiE 2014)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8493)

Abstract

According to some advocates, algorithmic information theory (a branch of theoretical computer science) promises to underwrite an ultimate formal theory of comprehensible patterns. The arguments have an intuitive appeal when expressed in terms of well-known computer languages, and can both inspire and explain practical results in machine learning. The theory of Solomonoff induction, which combines algorithmic information theory and Bayesian inference, has been suggested as a solution to the philosophical problem of induction and an idealisation of the scientific method; an extension of it forms part of a proposed mathematical theory of intelligence.
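As a rough sketch of the combination the abstract describes: Solomonoff induction weights each program p of length l(p) by 2^(-l(p)) and predicts by summing the weights of programs consistent with the data so far. The toy below is a non-universal stand-in (a hand-picked finite hypothesis list replaces the space of all programs on a universal machine, and the lengths are stipulated), intended only to show the mechanics of length-weighted Bayesian prediction.

```python
# Toy sketch of length-weighted prediction. Assumptions: a finite,
# hand-picked hypothesis list stands in for all programs of a universal
# machine; each hypothesis is (description_length, output_stream).
# Prior weight is 2^(-length); the next bit is predicted by summing
# the weights of hypotheses consistent with the observed prefix.

def predict_next(hypotheses, observed):
    """Posterior probability of the next bit, given an observed prefix."""
    weights = {"0": 0.0, "1": 0.0}
    for length, stream in hypotheses:
        # A hypothesis contributes only if it extends the observed data.
        if stream.startswith(observed) and len(stream) > len(observed):
            weights[stream[len(observed)]] += 2.0 ** -length
    total = sum(weights.values())
    return {bit: w / total for bit, w in weights.items()}

# Stipulated (length, stream) pairs -- not derived from any real machine.
hyps = [(2, "00000000"), (3, "01010101"), (5, "01000000")]
p = predict_next(hyps, "010")
assert p["1"] > p["0"]  # the shortest consistent hypothesis dominates
```

The shorter a consistent hypothesis, the more it dominates the prediction, which is the sense in which the framework formalises a preference for simplicity.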

Unfortunately, the philosophical import of algorithmic information theory is undermined by its dependence on an arbitrary choice of language (reference machine). While the choice of reference machine is irrelevant in the infinite limit, I observe that, considered over finite sets, there are infinitely many reference machines which give arbitrary evaluations of simplicity. I also explain why, regardless of how much data has been observed, infinitely many reference machines will always give every conceivable “best guess” answer to finite questions in Solomonoff induction.
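The reference-machine dependence at issue can be illustrated with a minimal sketch (not from the paper): two hypothetical description languages, each given as a finite description-to-output table, that reverse the simplicity ranking of the same pair of strings. Real reference machines are universal Turing machines; the point here is only that, over a finite set of strings, which string counts as "simpler" is an artefact of the chosen language.

```python
# Toy illustration (assumption: finite description->output tables stand
# in for universal reference machines). Complexity relative to a machine
# is the length of its shortest description producing the target string.

def complexity(machine, target):
    """Length of the shortest description the machine maps to target."""
    lengths = [len(d) for d, out in machine.items() if out == target]
    return min(lengths) if lengths else float("inf")

x, y = "00000000", "01010101"

# Machine A has a one-symbol primitive for x; y needs a literal encoding.
machine_a = {"a": x, "lit:" + x: x, "lit:" + y: y}
# Machine B has a one-symbol primitive for y instead.
machine_b = {"b": y, "lit:" + x: x, "lit:" + y: y}

assert complexity(machine_a, x) < complexity(machine_a, y)  # under A, x is simpler
assert complexity(machine_b, y) < complexity(machine_b, x)  # under B, y is simpler
```

The invariance theorem guarantees the two measures differ by at most an additive constant, but over any finite set of strings that constant is large enough to impose an arbitrary ranking.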

Finally, I argue that algorithmic information theory is philosophically incomplete because it pretends to a “God’s-eye view” and ignores relevant information in the structure of the observer. This issue has been raised before, but has received relatively little attention. The question of anthropic bias - how to take the existence of the reasoner into account when reasoning - remains a subject of major disagreement in Bayesian inference, and is likely to be so in algorithmic information theory as well.




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

McGregor, S. (2014). Natural Descriptions and Anthropic Bias: Extant Problems In Solomonoff Induction. In: Beckmann, A., Csuhaj-Varjú, E., Meer, K. (eds) Language, Life, Limits. CiE 2014. Lecture Notes in Computer Science, vol 8493. Springer, Cham. https://doi.org/10.1007/978-3-319-08019-2_30

  • DOI: https://doi.org/10.1007/978-3-319-08019-2_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08018-5

  • Online ISBN: 978-3-319-08019-2
