
Kolmogorov Complexity in Perspective Part I: Information Theory and Randomness

  • Marie Ferbus-Zanda
  • Serge Grigorieff
Chapter
Part of the Logic, Epistemology, and the Unity of Science book series (LEUS, volume 34)

Abstract

We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part I is dedicated to information theory and the mathematical formalization of randomness based on Kolmogorov complexity. This latter application goes back to the 1960s and 1970s, with the work of Martin-Löf, Schnorr, Chaitin, and Levin, and has gained new impetus in recent years.
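As a rough illustration of the two notions named above (our sketch, not part of the chapter), the following Python snippet contrasts Shannon entropy, a statistical per-symbol measure attached to a source, with a compression-based upper bound on Kolmogorov complexity, a measure attached to an individual string. Kolmogorov complexity is uncomputable; a general-purpose compressor such as zlib can only witness an upper bound on it, up to an additive constant.

```python
# Illustrative sketch only (not from the chapter). Shannon entropy is a
# statistical, per-symbol measure; Kolmogorov complexity measures the
# information content of one individual string. Since Kolmogorov complexity
# is uncomputable, a general-purpose compressor (zlib here) can only exhibit
# an upper bound on it, up to an additive constant.
import math
import random
import zlib

def empirical_entropy(s: str) -> float:
    """Shannon entropy, in bits per symbol, of the empirical distribution of s."""
    n = len(s)
    return -sum((s.count(c) / n) * math.log2(s.count(c) / n) for c in set(s))

def complexity_upper_bound_bits(s: str) -> int:
    """Bit-length of a zlib compression of s: an upper bound on its
    Kolmogorov complexity, up to an additive constant."""
    return 8 * len(zlib.compress(s.encode("ascii"), 9))

random.seed(0)
regular = "01" * 5000                                        # periodic string
pseudo = "".join(random.choice("01") for _ in range(10000))  # pseudo-random string

for name, s in [("periodic", regular), ("pseudo-random", pseudo)]:
    print(name, round(empirical_entropy(s), 3), complexity_upper_bound_bits(s))

# Both strings have empirical entropy close to 1 bit/symbol, yet the periodic
# one compresses to a few dozen bytes while the pseudo-random one does not.
# (The pseudo-random string in fact also has low Kolmogorov complexity, being
# generated by a short seeded program, but zlib cannot detect that; this is
# exactly why compression only yields an upper bound.)
```

The point of the sketch: entropy sees only symbol frequencies, so it cannot distinguish a periodic string from a random-looking one, whereas (upper bounds on) Kolmogorov complexity can.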

Keywords

Kolmogorov complexity · Chaitin · Partial computable functionals · Prefix-free code · Binary word

References

  1. Becher, V., Figueira, S., Nies, A., & Picchi, S. (2005). Program size complexity for possibly infinite computations. Notre Dame Journal of Formal Logic, 46(1), 51–64.
  2. Becher, V., Figueira, S., Grigorieff, S., & Miller, J. (2006). Random reals and halting probabilities. The Journal of Symbolic Logic, 71(4), 1411–1430.
  3. Bienvenu, L., & Merkle, W. (2007). Reconciling data compression and Kolmogorov complexity. In ICALP 2007, Wroclaw (LNCS, Vol. 4596, pp. 643–654).
  4. Bienvenu, L., Merkle, W., & Shen, A. (2008). A simple proof of Miller-Yu theorem. Fundamenta Informaticae, 83(1–2), 21–24.
  5. Bonfante, G., Kaczmarek, M., & Marion, J.-Y. (2006). On abstract computer virology: From a recursion-theoretic perspective. Journal in Computer Virology, 1(3–4), 45–54.
  6. Calude, C., & Jürgensen, H. (2005). Is complexity a source of incompleteness? Advances in Applied Mathematics, 35, 1–15.
  7. Chaitin, G. (1966). On the length of programs for computing finite binary sequences. Journal of the ACM, 13, 547–569.
  8. Chaitin, G. (1969). On the length of programs for computing finite binary sequences: Statistical considerations. Journal of the ACM, 16, 145–159.
  9. Chaitin, G. (1971). Computational complexity & Gödel incompleteness theorem. ACM SIGACT News, 9, 11–12.
  10. Chaitin, G. (1974). Information theoretic limitations of formal systems. Journal of the ACM, 21, 403–424.
  11. Chaitin, G. (1975). A theory of program size formally identical to information theory. Journal of the ACM, 22, 329–340.
  12. Delahaye, J. P. (1999). Information, complexité, hasard (2nd ed.). Paris: Hermès.
  13. Delahaye, J. P. (2006). Complexités: Aux limites des mathématiques et de l’informatique. Montreal: Belin-Pour la Science.
  14. Durand, B., & Zvonkin, A. (2004). Complexité de Kolmogorov. In E. Charpentier, A. Lesne, & N. Nikolski (Eds.), L’héritage de Kolmogorov en mathématiques (pp. 269–287). Berlin: Springer.
  15. Fallis, D. (1996). The source of Chaitin’s incorrectness. Philosophia Mathematica, 4, 261–269.
  16. Feller, W. (1968). An introduction to probability theory and its applications (Vol. 1, 3rd ed.). New York: Wiley.
  17. Ferbus-Zanda, M., & Grigorieff, S. (2004). Is randomness native to computer science? In G. Paun, G. Rozenberg, & A. Salomaa (Eds.), Current trends in theoretical computer science (pp. 141–179). Singapore: World Scientific.
  18. Ferbus-Zanda, M., & Grigorieff, S. (2006). Kolmogorov complexity and set theoretical representations of integers. Mathematical Logic Quarterly, 52(4), 381–409.
  19. Gács, P. (1974). On the symmetry of algorithmic information. Soviet Mathematics Doklady, 15, 1477–1480.
  20. Gács, P. (1980). Exact expressions for some randomness tests. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 26, 385–394.
  21. Gács, P. (1993). Lecture notes on descriptional complexity and randomness (pp. 1–67). Boston: Boston University. http://cs-pub.bu.edu/faculty/gacs/Home.html.
  22. Huffman, D. A. (1952). A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40, 1098–1101.
  23. Knuth, D. (1981). The art of computer programming. Volume 2: Seminumerical algorithms (2nd ed.). Reading: Addison-Wesley.
  24. Kolmogorov, A. N. (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung. Berlin: Springer [Foundations of the theory of probability, Chelsea, 1956].
  25. Kolmogorov, A. N. (1963). On tables of random numbers. Sankhya, The Indian Journal of Statistics, Series A, 25, 369–376.
  26. Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems of Information Transmission, 1(1), 1–7.
  27. Kolmogorov, A. N. (1983). Combinatorial foundation of information theory and the calculus of probability. Russian Mathematical Surveys, 38(4), 29–40.
  28. Lacombe, D. (1960). La théorie des fonctions récursives et ses applications. Bulletin de la Société Mathématique de France, 88, 393–468.
  29. van Lambalgen, M. (1989). Algorithmic information theory. The Journal of Symbolic Logic, 54(4), 1389–1400.
  30. Levin, L. (1973). On the notion of a random sequence. Soviet Mathematics Doklady, 14, 1413–1416.
  31. Levin, L. (1974). Laws of information conservation (non-growth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3), 206–210.
  32. Li, M., & Vitányi, P. (1997). An introduction to Kolmogorov complexity and its applications (2nd ed.). New York: Springer.
  33. Martin-Löf, P. (1966). The definition of random sequences. Information and Control, 9, 602–619.
  34. Martin-Löf, P. (1971). Complexity of oscillations in infinite binary sequences. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 19, 225–230.
  35. Miller, J., & Yu, L. (2008). On initial segment complexity and degrees of randomness. Transactions of the American Mathematical Society, 360, 3193–3210.
  36. Von Mises, R. (1919). Grundlagen der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 5, 52–99.
  37. Von Mises, R. (1939). Probability, statistics and truth. New York: Macmillan. (Reprinted: Dover 1981)
  38. Moschovakis, J. R. (1993). An intuitionistic theory of lawlike, choice and lawless sequences. In J. Oikkonen & J. Väänänen (Eds.), Logic colloquium ’90, Helsinki (Lecture notes in logic, Vol. 2, pp. 191–209). Berlin: Springer.
  39. Moschovakis, J. R. (1994). More about relatively lawless sequences. The Journal of Symbolic Logic, 59(3), 813–829.
  40. Moschovakis, J. R. (1996). A classical view of the intuitionistic continuum. Annals of Pure and Applied Logic, 81, 9–24.
  41. Von Neumann, J. (1951). Various techniques used in connection with random digits. In A. S. Householder, G. E. Forsythe, & H. H. Germond (Eds.), Monte Carlo method (National bureau of standards applied mathematics series, Vol. 12, pp. 36–38). Washington, D.C.: U.S. Government Printing Office.
  42. Nies, A., Stephan, F., & Terwijn, S. A. (2005). Randomness, relativization and Turing degrees. The Journal of Symbolic Logic, 70(2), 515–535.
  43. Paul, W. (1979). Kolmogorov’s complexity and lower bounds. In L. Budach (Ed.), Proceedings of 2nd international conference on fundamentals of computation theory, Berlin/Wendisch-Rietz (pp. 325–334). Berlin: Akademie.
  44. Raatikainen, P. (1998). On interpreting Chaitin’s incompleteness theorem. Journal of Philosophical Logic, 27(6), 569–586.
  45. Russell, B. (1908). Mathematical logic as based on the theory of types. American Journal of Mathematics, 30, 222–262. (Reprinted in From Frege to Gödel: A source book in mathematical logic, 1879–1931, pp. 150–182, J. van Heijenoort, Ed., 1967)
  46. Schnorr, C. P. (1971a). A unified approach to the definition of random sequences. Mathematical Systems Theory, 5, 246–258.
  47. Schnorr, C. P. (1971b). Zufälligkeit und Wahrscheinlichkeit (Lecture notes in mathematics, Vol. 218). Berlin/New York: Springer.
  48. Schnorr, C. P. (1973). Process complexity and effective random tests. Journal of Computer and System Sciences, 7, 376–388.
  49. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.
  50. Soare, R. (1996). Computability and recursion. Bulletin of Symbolic Logic, 2, 284–321.
  51. Solomonoff, R. (1964a). A formal theory of inductive inference, part I. Information and Control, 7, 1–22.
  52. Solomonoff, R. (1964b). A formal theory of inductive inference, part II. Information and Control, 7, 224–254.
  53. Zvonkin, A., & Levin, L. (1970). The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russian Mathematical Surveys, 25(6), 83–124.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. LIAFA, CNRS & Université Paris Diderot – Paris 7, Paris Cedex 13, France
