Compression and Intelligence: Social Environments and Communication

  • David L. Dowe
  • José Hernández-Orallo
  • Paramjit K. Das
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6830)

Abstract

Compression has been advocated as one of the principles that pervade inductive inference and prediction, and from there it has also been recurrent in definitions and tests of intelligence. However, this connection is less explicit in newer approaches to intelligence. In this paper, we advocate that the notion of compression can appear again in definitions and tests of intelligence through the concepts of ‘mind-reading’ and ‘communication’ in the context of multi-agent systems and social environments. Our main position is that two-part Minimum Message Length (MML) compression is not only more natural and effective for agents with limited resources, but also much more appropriate for agents in (co-operative) social environments than one-part compression schemes, particularly those that use a posterior-weighted mixture of all available models following Solomonoff’s theory of prediction. We think that recognising these differences is important to avoid a naive view of ‘intelligence as compression’ in favour of a better understanding of how, why and where (one-part or two-part, lossless or lossy) compression is needed.
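To make the one-part/two-part contrast concrete, the following is a minimal numeric sketch (ours, not from the paper) using a Bernoulli model class with biases quantised to a 6-bit grid. The two-part (MML-style) code first states one bias, then encodes the data given that bias; the one-part (Solomonoff-style) code encodes the data under a uniform-prior mixture over all candidate biases. The 6-bit precision and all names here are illustrative assumptions.

```python
import math

def two_part_length(data, p, precision_bits=6):
    """Two-part MML-style code length in bits: first state the model
    (the bias p to fixed precision), then encode the data under it."""
    model_bits = precision_bits  # cost of naming one model on a 6-bit grid
    data_bits = -sum(math.log2(p if x else 1.0 - p) for x in data)
    return model_bits + data_bits

def mixture_length(data, candidates):
    """One-part Solomonoff-style code length in bits: encode the data
    under a prior-weighted mixture of every candidate model at once."""
    prior = 1.0 / len(candidates)
    mixture_prob = sum(prior * math.prod((p if x else 1.0 - p) for x in data)
                       for p in candidates)
    return -math.log2(mixture_prob)

# Ten coin flips: 7 heads (1), 3 tails (0).
data = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
# 63 candidate bias values on a 6-bit grid (excluding 0 and 1).
candidates = [i / 64 for i in range(1, 64)]

best_two_part = min(two_part_length(data, p) for p in candidates)
one_part = mixture_length(data, candidates)
```

On this toy data the mixture code is a few bits shorter, but it names no single hypothesis; the two-part code pays roughly the model-statement cost in exchange for an explicit, transmissible model, which is the kind of artefact a communicating agent in a social environment can actually state to another.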

Keywords

two-part compression · Minimum Message Length (MML) · Solomonoff theory of prediction · tests of intelligence · communication

References

  1. Chaitin, G.J.: Gödel’s theorem and information. International Journal of Theoretical Physics 21(12), 941–954 (1982)
  2. Dowe, D.L.: Foreword re C. S. Wallace. Computer Journal 51(5), 523–560 (2008); Christopher Stewart Wallace (1933–2004) memorial special issue
  3. Dowe, D.L.: Minimum Message Length and statistically consistent invariant (objective?) Bayesian probabilistic inference - from (medical) “evidence”. Social Epistemology 22(4), 433–460 (2008)
  4. Dowe, D.L.: MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness. In: Bandyopadhyay, P.S., Forster, M.R. (eds.) Handbook of the Philosophy of Science. Philosophy of Statistics, vol. 7, pp. 901–982. Elsevier, Amsterdam (2011)
  5. Dowe, D.L., Hajek, A.R.: A computational extension to the Turing Test. Technical Report #97/322, Dept. of Computer Science, Monash University, Melbourne, Australia, 9 pp. (1997)
  6. Dowe, D.L., Hajek, A.R.: A non-behavioural, computational extension to the Turing Test. In: Intl. Conf. on Computational Intelligence & Multimedia Applications (ICCIMA 1998), Gippsland, Australia, pp. 101–106 (February 1998)
  7. Hernández-Orallo, J.: Beyond the Turing Test. Journal of Logic, Language & Information 9(4), 447–466 (2000)
  8. Hernández-Orallo, J.: Constructive reinforcement learning. International Journal of Intelligent Systems 15(3), 241–264 (2000)
  9. Hernández-Orallo, J.: On the computational measurement of intelligence factors. In: Meystel, A. (ed.) Performance Metrics for Intelligent Systems Workshop, pp. 1–8. National Institute of Standards and Technology, Gaithersburg, MD, USA (2000)
  10. Hernández-Orallo, J., Dowe, D.L.: Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence 174(18), 1508–1539 (2010)
  11. Hernández-Orallo, J., Minaya-Collado, N.: A formal definition of intelligence based on an intensional variant of Kolmogorov complexity. In: Proc. Intl. Symposium of Engineering of Intelligent Systems (EIS 1998), pp. 146–163. ICSC Press (1998)
  12. Legg, S., Hutter, M.: Universal intelligence: A definition of machine intelligence. Minds and Machines 17(4), 391–444 (2007)
  13. Lewis, D.K., Shelby-Richardson, J.: Scriven on human unpredictability. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 17(5), 69–74 (1966)
  14. Oppy, G., Dowe, D.L.: The Turing Test. In: Zalta, E.N. (ed.) Stanford Encyclopedia of Philosophy, Stanford University, Stanford (2011), http://plato.stanford.edu/entries/turing-test/
  15. Salomon, D., Motta, G., Bryant, D.: Handbook of Data Compression. Springer-Verlag, New York (2009)
  16. Sanghi, P., Dowe, D.L.: A computer program capable of passing I.Q. tests. In: 4th International Conference on Cognitive Science (and 7th Australasian Society for Cognitive Science Conference), vol. 2, pp. 570–575. Univ. of NSW, Sydney, Australia (July 2003)
  17. Sayood, K.: Introduction to Data Compression. Morgan Kaufmann, San Francisco (2006)
  18. Scriven, M.: An essential unpredictability in human behavior. In: Wolman, B.B., Nagel, E. (eds.) Scientific Psychology: Principles and Approaches, pp. 411–425. Basic Books (Perseus Books), New York (1965)
  19. Searle, J.R.: Minds, brains and programs. Behavioral and Brain Sciences 3, 417–457 (1980)
  20. Solomonoff, R.J.: A formal theory of inductive inference. Part I. Information and Control 7(1), 1–22 (1964)
  21. Sutton, R.S.: Generalization in reinforcement learning: Successful examples using sparse coarse coding. In: Advances in Neural Information Processing Systems, pp. 1038–1044 (1996)
  22. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. The MIT Press, Cambridge (1998)
  23. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
  24. Veness, J., Ng, K.S., Hutter, M., Silver, D.: A Monte Carlo AIXI approximation. Journal of Artificial Intelligence Research (JAIR) 40, 95–142 (2011)
  25. Wallace, C.S.: Statistical and Inductive Inference by Minimum Message Length. Springer, Heidelberg (2005)
  26. Wallace, C.S., Boulton, D.M.: An information measure for classification. Computer Journal 11(2), 185–194 (1968)
  27. Wallace, C.S., Dowe, D.L.: Intrinsic classification by MML - the Snob program. In: Proc. 7th Australian Joint Conf. on Artificial Intelligence, pp. 37–44. World Scientific, Singapore (November 1994)
  28. Wallace, C.S., Dowe, D.L.: Minimum message length and Kolmogorov complexity. Computer Journal 42(4), 270–283 (1999); Special issue on Kolmogorov complexity
  29. Wallace, C.S., Dowe, D.L.: MML clustering of multi-state, Poisson, von Mises circular and Gaussian distributions. Statistics and Computing 10, 73–83 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • David L. Dowe¹
  • José Hernández-Orallo²
  • Paramjit K. Das¹

  1. Computer Science and Software Engineering, Clayton School of Information Technology, Monash University, Australia
  2. DSIC, Universitat Politècnica de València, València, Spain