On Potential Cognitive Abilities in the Machine Kingdom

Published in: Minds and Machines

Abstract

Animals, including humans, are usually judged on what they could become, rather than on what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quickly and how likely it is that the ability will be attained. In principle, things should be no different for the ‘machine kingdom’. While machines can be characterised by a set of cognitive abilities, and measuring them is already a big challenge, known as ‘universal psychometrics’, a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering permanent and temporal potentials. From these definitions, we analyse the relations between some potential abilities, bring out their dependency on the environment distribution, and suggest some ideas about how potential abilities can be measured. Finally, we analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or potentially intelligent.


[Fig. 1 and Fig. 2 omitted.]

Notes

  1. We use the term ‘computably realisable’ to express that there is at least one (Turing) machine with the property. This does not mean that determining whether a machine has the property is decidable. All this will be further clarified in Section “Properties, Universality and Preservation”.

  2. Possibly even earlier by Martin-Löf, according to Levin (personal communication).

  3. Note the difference with the concept of “universal probability” (distribution), as introduced by Solomonoff (1964).

  4. In the first part of the paper we will consider non-interactive (classical) Turing machines, while in the second part we will refer to the set of interactive (and resource-bounded) machines, as done by Hernández-Orallo et al. (2012a).

  5. The first digit of each number would be generated uniformly from 1 to 9 and the remaining digits would each be generated uniformly from 0 to 9.
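The sampling scheme in this note is straightforward to implement; a minimal sketch in Python (the function name is ours):

```python
import random

def random_number(num_digits):
    """Sample a number with the stated scheme: the first digit is drawn
    uniformly from 1 to 9, and each remaining digit uniformly from 0 to 9."""
    digits = [str(random.randint(1, 9))]
    digits += [str(random.randint(0, 9)) for _ in range(num_digits - 1)]
    return int("".join(digits))
```

Since the leading digit is never 0, every sample has exactly `num_digits` digits.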

  6. In fact, the distinction between potentiality and actuality can be traced back to Aristotle’s Metaphysics, in book \(\Uptheta\) (IX) (Aristotle, by Ross, 1924), with his distinction between potentiality (dunamis) and actuality (entelecheia or energeia). In part 6 he says: “potentially, for instance, a statue of Hermes is in the block of wood […], and we call even the man who is not studying a man of science, if he is capable of studying”.

  7. Section 7 of Turing’s (1950) paper, entitled “Learning Machines”, says: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. […] We have thus divided our problem into two parts. The child programme and the education process. […] This process could follow the normal teaching of a child.”

  8. We follow the definition of self-delimiting machines by Li and Vitányi (2008, p.201) and the equivalent definition of prefix TM by Hutter (2005, p.35).

  9. Technically, in the general case, both U and M must be prefix-free. However, since we are assuming self-delimiting Turing machines, all this is ensured.

  10. Since universality is 0 for any null machine, to show that it is genuine we need only show that it is non-vanishing. This follows from the definition of universality probability as a limit (from Dowe 2008, footnote 70), which we know (from Barmpalias and Dowe 2012, Theorem 2.4, or alternatively from the text in and surrounding our footnote 11) to have a lower bound greater than 0.

  11. A simpler proof of Barmpalias and Dowe (2012, Theorem 2.4 and Corollary 2.7) was given by Leonid Levin (Barmpalias and Dowe 2012, p. 3499), but an even simpler proof is based on the fact that a 1-dimensional fair (50 %:50 %) random walk will pass any given point infinitely often from either direction. From there, we sketch this proof. Consider a recursive enumeration of UTMs \(T_1, \dots, T_i, \dots\) (which might or might not be identical) and a monotonically increasing recursive function \({g: {\mathbb{N}} \rightarrow {\mathbb{N}}}\) such that \(g(1) \ge 1\) and for all \(i \ge 1\) we have \(g(i + 1) > g(i) + 1\). We define a UTM, U, as follows. For a given string, x, let \(j_{x,1} < j_{x,2} < \dots\) be the smallest values of j (in ascending order) such that the first 2j bits of x contain j 0s and j 1s. If there are a k and an i such that \(j_{x,k} = g(i)\), then choose the smallest such k and i; the first \(2 j_{x,k}\) bits of x are used to get U to emulate/become \(T_i\), and the subsequent bits of x are the input to \(T_i\). (This defines the UTM U.) We can make the universality probability of U arbitrarily close to 1 by setting g(1) large enough and having g grow sufficiently rapidly. Since the set \(\{ m/2^n : 1 \le n,\ 1 < m < 2^n \}\) is dense in the open interval (0, 1), it also follows that the set of universality probabilities of UTMs is dense in [0, 1].
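The values \(j_{x,1} < j_{x,2} < \dots\) used in this construction are simply the lengths of the balanced even-length prefixes of x, i.e. the times at which the associated random walk returns to the origin; a minimal sketch of how they could be computed (the function name is ours):

```python
def balanced_prefix_lengths(x):
    """Return the values j (in ascending order) such that the first 2j
    bits of the bit string x contain exactly j 0s and j 1s."""
    js = []
    balance = 0  # running (#1s - #0s): the position of the random walk
    for i, bit in enumerate(x, start=1):
        balance += 1 if bit == "1" else -1
        if i % 2 == 0 and balance == 0:
            js.append(i // 2)
    return js
```

Because a fair random walk returns to the origin infinitely often with probability 1, a random infinite x yields infinitely many such j, which is exactly what the proof exploits.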

  12. The universality probability and the halting probability are very special cases. Once a machine has halted, it can never re-start. Once a machine has lost its universality, it can never again be universal.

  13. We have used point potential here, but this could be extended to period potential.

  14. There are n-universal machines which can resume their original ‘state’ after n input bits.

  15. Searle’s Chinese room elaborates on this for a different (and arguably misleading) purpose.

  16. For probabilistic environments and agents the notion of emulation (which we will see next) would be somewhat more challenging, understood as a probabilistic expectation rather than in terms of exact values.

  17. The same seems to apply to environments. While the notion of universal environment is appealing, the inclusion of time makes this notion more general (but also infeasible).

  18. Extending this for all possible computable agents would depend on whether they are probabilistic or deterministic, and resource-bounded or not. We can just consider a distribution over all computable agents as defined in Section “Potential Abilities of Interactive Agents”.

  19. For example, if we redundantly code 0 and 1 as 01 and 10 respectively, if M is a machine and \(M_R\) is its redundant counterpart, and (say) M(100) = 0110, then \(M_R(100101) = 01101001\).
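The redundant coding in this note can be sketched directly (the function names are ours):

```python
CODE = {"0": "01", "1": "10"}
DECODE = {"01": "0", "10": "1"}

def redundant_encode(bits):
    """Code each bit redundantly: 0 -> 01, 1 -> 10."""
    return "".join(CODE[b] for b in bits)

def redundant_decode(bits):
    """Inverse of redundant_encode; assumes a well-formed redundant string."""
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))
```

On this coding, the redundant counterpart of M can be obtained by decoding the input, running M, and re-encoding the output, so the note's example amounts to `redundant_encode("0110") == "01101001"` with input `redundant_encode("100") == "100101"`.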

  20. In fact, Conway’s game of life (Gardner 1970) is a very simple ‘universe’ that can contain universal computers: given an appropriate ‘big bang’ (i.e., a start-up configuration of the cells), it has been shown to contain a universal Turing machine.
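The ‘universe’ referred to here is easy to simulate; a minimal Game of Life step over a set of live cells (this illustrates the rules only, not the universal-machine construction, which requires a carefully engineered start-up configuration):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbours,
    # or 2 live neighbours and is itself already alive.
    return {c for c, n in neighbour_counts.items() if n == 3 or (n == 2 and c in live)}
```

For instance, the three-cell ‘blinker’ oscillates with period 2, while the 2 × 2 ‘block’ is stable.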

  21. Recall the related quotations from Turing (1950, Section 7) in our footnote 7.

References

  • Amari, S., Fujita, N., Shinomoto, S. (1992). Four types of learning curves. Neural Computation, 4(4), 605–618.

  • Aristotle (Translation, Introduction, and Commentary by Ross, W.D.) (1924). Aristotle’s Metaphysics. Oxford: Clarendon Press.

  • Barmpalias, G. & Dowe, D. L. (2012). Universality probability of a prefix-free machine. Philosophical transactions of the Royal Society A [Mathematical, Physical and Engineering Sciences] (Phil Trans A), Theme Issue ‘The foundations of computation, physics and mentality: The Turing legacy’ compiled and edited by Barry Cooper and Samson Abramsky, 370, pp 3488–3511.

  • Chaitin, G. J. (1966). On the length of programs for computing finite sequences. Journal of the Association for Computing Machinery, 13, 547–569.

  • Chaitin, G. J. (1975). A theory of program size formally identical to information theory. Journal of the ACM (JACM), 22(3), 329–340.

  • Dowe, D. L. (2008, September). Foreword re C. S. Wallace. Computer Journal, 51(5), 523–560. Christopher Stewart WALLACE (1933–2004) memorial special issue.

  • Dowe, D. L. (2011). MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness. In: P. S. Bandyopadhyay, M. R. Forster (Eds), Handbook of the philosophy of science—Volume 7: Philosophy of statistics (pp. 901–982). Amsterdam: Elsevier.

  • Dowe, D. L. & Hajek, A. R. (1997a). A computational extension to the Turing test. Technical report #97/322, Dept Computer Science, Monash University, Melbourne, Australia, 9 pp, http://www.csse.monash.edu.au/publications/1997/tr-cs97-322-abs.html.

  • Dowe, D. L. & Hajek, A. R. (1997b, September). A computational extension to the Turing Test. in Proceedings of the 4th conference of the Australasian Cognitive Science Society, University of Newcastle, NSW, Australia, 9 pp.

  • Dowe, D. L. & Hajek, A. R. (1998, February). A non-behavioural, computational extension to the Turing Test. In: International conference on computational intelligence and multimedia applications (ICCIMA’98), Gippsland, Australia, pp 101–106.

  • Dowe, D. L., Hernández-Orallo, J. (2012). IQ tests are not for machines, yet. Intelligence, 40(2), 77–81.

  • Gallistel, C. R., Fairhurst, S., & Balsam, P. (2004). The learning curve: Implications of a quantitative analysis. Proceedings of the National Academy of Sciences of the United States of America, 101(36), 13124–13131.

  • Gardner, M. (1970). Mathematical games: The fantastic combinations of John Conway’s new solitaire game “life”. Scientific American, 223(4), 120–123.

  • Goertzel, B. & Bugaj, S. V. (2009). AGI preschool: A framework for evaluating early-stage human-like AGIs. In Proceedings of the second international conference on artificial general intelligence (AGI-09), pp 31–36.

  • Hernández-Orallo, J. (2000a). Beyond the Turing Test. Journal of Logic, Language & Information, 9(4), 447–466.

  • Hernández-Orallo, J. (2000b). On the computational measurement of intelligence factors. In A. Meystel (Ed), Performance metrics for intelligent systems workshop (pp 1–8). Gaithersburg, MD: National Institute of Standards and Technology.

  • Hernández-Orallo, J. (2010). On evaluating agent performance in a fixed period of time. In M. Hutter et al. (Eds.), Proceedings of 3rd international conference on artificial general intelligence (pp. 25–30). New York: Atlantis Press.

  • Hernández-Orallo, J., & Dowe, D. L. (2010). Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence, 174(18), 1508–1539.

  • Hernández-Orallo, J. & Dowe, D. L. (2011, April). Mammals, machines and mind games. Who’s the smartest? The Conversation, http://theconversation.edu.au/mammals-machines-and-mind-games-whos-the-smartest-566.

  • Hernández-Orallo J., Dowe D. L., España-Cubillo S., Hernández-Lloreda M. V., & Insa-Cabrera J. (2011). On more realistic environment distributions for defining, evaluating and developing intelligence. In: J. Schmidhuber, K. R. Thórisson, & M. Looks (Eds.), Artificial general intelligence 2011, volume 6830, LNAI series, pp. 82–91. New York: Springer.

  • Hernández-Orallo, J., Dowe, D. L., & Hernández-Lloreda, M. V. (2012a, March). Measuring cognitive abilities of machines, humans and non-human animals in a unified way: towards universal psychometrics. Technical report 2012/267, Faculty of Information Technology, Clayton School of I.T., Monash University, Australia.

  • Hernández-Orallo, J., Insa, J., Dowe, D. L., & Hibbard, B. (2012b). Turing tests with Turing machines. In A. Voronkov (Ed.), The Alan Turing centenary conference, Turing-100, Manchester, volume 10 of EPiC Series, pp 140–156.

  • Hernández-Orallo, J., & Minaya-Collado, N. (1998). A formal definition of intelligence based on an intensional variant of Kolmogorov complexity. In Proceedings of the international symposium of engineering of intelligent systems (EIS’98) (pp 146–163). Switzerland: ICSC Press.

  • Herrmann, E., Call, J., Hernández-Lloreda, M. V., Hare, B., & Tomasello, M. (2007). Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science, 317(5843), 1360–1366.

  • Herrmann, E., Hernández-Lloreda, M. V., Call, J., Hare, B., & Tomasello, M. (2010). The structure of individual differences in the cognitive abilities of children and chimpanzees. Psychological Science, 21(1), 102–110.

  • Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of educational psychology, 57(5), 253.

  • Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. New York: Springer.

  • Insa-Cabrera, J., Dowe, D. L., España, S., Hernández-Lloreda, M. V., & Hernández-Orallo, J. (2011a). Comparing humans and AI agents. In AGI: 4th conference on artificial general intelligence—Lecture Notes in Artificial Intelligence (LNAI), volume 6830, pp 122–132. Springer, New York.

  • Insa-Cabrera, J., Dowe, D. L., & Hernández-Orallo, J. (2011b). Evaluating a reinforcement learning algorithm with a general intelligence test. In CAEPIA—Lecture Notes in Artificial Intelligence (LNAI), volume 7023, pages 1–11. Springer, New York.

  • Kearns, M. & Singh, S. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2), 209–232.

  • Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems of Information Transmission, 1, 4–7.

  • Legg, S. (2008, June). Machine super intelligence. PhD thesis, Department of Informatics, University of Lugano.

  • Legg, S. & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4), 391–444.

  • Legg, S., & Veness, J. (2012). An approximation of the universal intelligence measure. In Proceedings of Solomonoff 85th memorial conference. New York: Springer.

  • Levin, L. A. (1973). Universal sequential search problems. Problems of Information Transmission, 9(3), 265–266.

  • Li, M., Vitányi, P. (2008). An introduction to Kolmogorov complexity and its applications (3rd ed). New York: Springer.

  • Little, V. L., & Bailey, K. G. (1972). Potential intelligence or intelligence test potential? A question of empirical validity. Journal of Consulting and Clinical Psychology, 39(1), 168.

  • Mahoney, M. V. (1999). Text compression as a test for artificial intelligence. In Proceedings of the national conference on artificial intelligence, AAAI (pp. 486–502). New Jersey: Wiley.

  • Mahrer, A. R. (1958). Potential intelligence: A learning theory approach to description and clinical implication. The Journal of General Psychology, 59(1), 59–71.

  • Oppy, G., & Dowe, D. L. (2011). The Turing Test. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. Stanford University. http://plato.stanford.edu/entries/turing-test/.

  • Orseau, L. & Ring, M. (2011). Self-modification and mortality in artificial agents. In AGI: 4th conference on artificial general intelligence—Lecture Notes in Artificial Intelligence (LNAI), volume 6830, pages 1–10. Springer, New York.

  • Ring, M. & Orseau, L. (2011). Delusion, survival, and intelligent agents. In AGI: 4th conference on artificial general intelligence—Lecture Notes in Artificial Intelligence (LNAI), volume 6830, pp. 11–20. Springer, New York.

  • Schaeffer, J., Burch, N., Bjornsson, Y., Kishimoto, A., Muller, M., Lake, R., et al. (2007). Checkers is solved. Science, 317(5844), 1518.

  • Solomonoff, R. J. (1962). Training sequences for mechanized induction. In M. Yovits, G. Jacobi, & G. Goldsteins (Eds.), Self-Organizing Systems, 7, 425–434.

  • Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7(1–22), 224–254.

  • Solomonoff, R. J. (1967). Inductive inference research: Status, Spring 1967. RTB 154, Rockford Research, Inc., 140 1/2 Mt. Auburn St., Cambridge, Mass. 02138, July 1967.

  • Solomonoff, R. J. (1978). Complexity-based induction systems: comparisons and convergence theorems. IEEE Transactions on Information Theory, 24(4), 422–432.

  • Solomonoff, R. J. (1984). Perfect training sequences and the costs of corruption—A progress report on induction inference research. Oxbridge Research.

  • Solomonoff, R. J. (1985). The time scale of artificial intelligence: Reflections on social effects. Human Systems Management, 5, 149–153.

  • Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: The MIT press.

  • Thorp, T. R., & Mahrer, A. R. (1959). Predicting potential intelligence. Journal of Clinical Psychology, 15(3), 286–288.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Veness, J., Ng, K. S., Hutter, M., & Silver, D. (2011). A Monte Carlo AIXI approximation. Journal of Artificial Intelligence Research, JAIR, 40, 95–142.

  • Wallace, C. S. (2005). Statistical and inductive inference by minimum message length. New York: Springer.

  • Wallace, C. S., & Boulton, D. M. (1968). An information measure for classification. Computer Journal, 11, 185–194.

  • Wallace, C. S., & Dowe, D. L. (1999a). Minimum message length and Kolmogorov complexity. Computer Journal, 42(4), 270–283.

  • Wallace, C. S., & Dowe, D. L. (1999b). Refinements of MDL and MML coding. Computer Journal, 42(4), 330–337.

  • Woergoetter, F., & Porr, B. (2008). Reinforcement learning. Scholarpedia, 3(3), 1448.

  • Zvonkin, A. K., & Levin, L. A. (1970). The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russian Mathematical Surveys, 25, 83–124.

Acknowledgments

We thank the anonymous reviewers for their comments, which have helped to significantly improve this paper. This work was supported by the MEC-MINECO projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, the COST - European Cooperation in the field of Scientific and Technical Research IC0801 AT. Finally, we thank three pioneers ahead of their time(s). We thank Ray Solomonoff (1926–2009) and Chris Wallace (1933–2004) for all that they taught us, directly and indirectly. And, in his centenary year, we thank Alan Turing (1912–1954), with whom it perhaps all began.

Author information

Correspondence to José Hernández-Orallo.

About this article

Cite this article

Hernández-Orallo, J., Dowe, D.L. On Potential Cognitive Abilities in the Machine Kingdom. Minds & Machines 23, 179–210 (2013). https://doi.org/10.1007/s11023-012-9299-6
