The missing G

Open Forum

Abstract

Artificial general intelligence (AGI) is not a new notion, but it has certainly been gaining traction in recent years, and academic as well as industry resources are being redirected to AGI research. The main reason for this is that current AI techniques are limited: they are designed to operate in specific problem domains, following meticulous preparation. Such systems cannot operate in an unknown environment or under conditions of uncertainty, reuse knowledge gained in another problem domain, or autonomously learn and understand the problem domain. We shall call AI systems capable of such feats artificial general intelligence (AGI) systems. The three tasks of this paper are to provide a working definition of the term AGI, to examine the “missing G”, i.e., the set of abilities that current AI systems lack and whose implementation would result in a basic AGI system, and to consider different approaches, including a hybrid one, to a comprehensive solution for an AGI.

Notes

  1. See Conference Series on Artificial General Intelligence. https://agi-conference.org/ Accessed Nov 2018.

  2. See MIT course page https://agi.mit.edu/. Accessed Nov 2018.

  3. Yann LeCun is a professor of computer science and the Director of AI Research at Facebook.

  4. What is AGI? https://intelligence.org/2013/08/11/what-is-agi/ Accessed Nov 2018.

  5. This is only for illustration purposes. The reader should keep in mind that future AGI architectures might be quite different.

  6. Recent ideas regarding the way to train an agent to follow the right values involve Adversarial Training, i.e., training AI agents by letting them compete against other AI agents.
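
The idea can be illustrated with a deliberately minimal self-play sketch; the game (matching pennies), the agents and the adaptation rule below are invented for this illustration and are not drawn from any of the works discussed in the paper. The only point it is meant to convey is that each agent's "training signal" is another adapting agent rather than a fixed environment.

```python
# Toy sketch of adversarial (self-play) training: two agents repeatedly play
# matching pennies, each keeping a running estimate of the other's behaviour
# and (softly) best-responding to it. Illustrative only.
import random

def soft_best_response(opponent_heads_freq, matcher):
    # The matcher wants to copy the opponent; the mismatcher wants the opposite.
    p = opponent_heads_freq if matcher else 1.0 - opponent_heads_freq
    return 0.9 * p + 0.05          # keep a little exploration

def self_play(rounds=20000):
    heads_a = heads_b = 1          # smoothed counts of each agent's "heads" plays
    total = 2
    matches = 0
    for _ in range(rounds):
        p_a = soft_best_response(heads_b / total, matcher=True)   # A adapts to B
        p_b = soft_best_response(heads_a / total, matcher=False)  # B adapts to A
        a = random.random() < p_a
        b = random.random() < p_b
        heads_a += a
        heads_b += b
        total += 1
        matches += (a == b)        # A scores when the coins match
    return matches / rounds        # drifts toward 0.5 as both agents adapt

if __name__ == "__main__":
    print(f"matcher's win rate after self-play: {self_play():.3f}")
```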

  7. See Hohwy 2013 and Clark 2016 for overviews.

  8. However, in recent decades, following works such as Brooks 1991, the use of internal representations in artificial systems such as robotic systems has been laid aside. Brooks had noticed that internal representations “get in the way” when building very simple intelligent systems, and therefore urged researchers to “use the world as its own model”.

  9. As Wilkenfeld also mentions, this point is similar to the one made by Woodward (2003) in his manipulationist accounts of causation and explanation, according to which causes can be thought of as devices for manipulating effects, and causal explanations include citing causal variables the alteration of which (in appropriately specified counterfactuals) would have affected the explanandum.

  10. Wilkenfeld tries to remain as neutral as possible regarding the question of what mental representations are, but commits himself to the assertion that they are, minimally, “computational structures with content that are susceptible to mental transformations” and to the assertion that this is “consistent with classic computationalism” (ibid.). Hence, we can assume that at least some versions of the Computational Theory of Mind comply with Wilkenfeld’s minimal description of mental representations.

  11. Object O is any object of understanding; it can be, for example, a theory in physics, a proof in mathematics or logic, a person (as in, “I understand my friend”), a story or an event, an action, or a phrase in a language.

  12. Italicized concepts are rigorously defined in Thórisson et al. 2016, §3, and I saw no point in reiterating these definitions here.

  13. In this paper, I employ the terms “abduction” and “abductive reasoning” in their more modern sense of justifying hypotheses. In this sense, abductive reasoning is often associated with “Inference to the Best Explanation” (in contrast to the historical sense, according to which “it refers to the place of explanatory reasoning in generating hypotheses” [Douven 2017: 1]).

  14. See also van Fraassen 1980: 143, who termed this problem “the best of a bad lot”.

  15. See McIlraith 1998, §2 and especially §3.

  16. Other definitions of best explanation can include additional criteria such as priority rankings or probabilities.
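
To make the selection step sketched in notes 13–16 concrete, here is a toy "inference to the best explanation" in code; the hypotheses, the cost and prior values, and the two ranking criteria are invented for the example and are not McIlraith's (1998) formalism. Admissible hypotheses are those that account for every observation; the best one is then chosen by parsimony, with prior plausibility as a tie-breaker.

```python
# Toy inference-to-the-best-explanation: among candidate hypotheses that
# account for all observations, pick the most parsimonious one, breaking
# ties by prior plausibility. Invented example for illustration only.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    explains: frozenset   # observations this hypothesis accounts for
    assumptions: int      # how many extra assumptions it needs (cost)
    prior: float          # prior plausibility in [0, 1]

def best_explanation(observations, candidates):
    # 1. Admissibility: keep hypotheses that cover every observation.
    admissible = [h for h in candidates if observations <= h.explains]
    if not admissible:
        return None
    # 2. Ranking: prefer fewer assumptions, then higher prior plausibility.
    return min(admissible, key=lambda h: (h.assumptions, -h.prior))

obs = {"engine_wont_start", "lights_dim"}
candidates = [
    Hypothesis("dead_battery", frozenset(obs), assumptions=1, prior=0.6),
    Hypothesis("empty_tank",   frozenset({"engine_wont_start"}), 1, 0.3),
    Hypothesis("alien_prank",  frozenset(obs), assumptions=7, prior=0.01),
]
print(best_explanation(obs, candidates).name)   # -> dead_battery
```

A logic-based treatment would replace the coverage test with entailment from a background theory, and the ranking with criteria of the kind mentioned in note 16.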

  17. See Boden 1998; 2004; 2014.

  18. An instance of creative thinking that does not involve an exploration or transformation of an existing conceptual space, but rather develops a new one, or creates it from scratch.

  19. This is because there are examples of creativity being exhibited without consciousness or emotions. See Boden 2014, §4.

  20. Therefore, creativity defined by reference to ideas and artefacts must be intentional. See Boden 2014, §4, for an exception to this rule regarding artefacts such as artworks and poems.

  21. At this point, we should also mention the 4e (Embodied, Embedded, Extended, Enactive) cognition methodology, which is prevalent in robotics as it focuses on issues of the embodiment of cognition. In general, proponents of this methodology claim that many (if not all) cognitive phenomena are in some sense dependent on the morphological, biological and physiological details of the agent’s body, its environment and its interaction with the environment. Thus, they claim that cognition involves extracranial processes. This claim can take a strong and a weak form: the former suggests that cognitive processes are essentially based on extracranial ones, the latter that they are only causally dependent on them. In this paper, I assume that the main requirements of a suitable weak interpretation of this methodology can be implemented within the two approaches, since both can accommodate a sort of causal interaction with the environment. A discussion of the strong interpretation of 4e cognition as a separate approach is beyond the scope of this paper.

  22. See Paul Rosenbloom’s interview on lessons learned from Soar to Sigma: Paul Rosenbloom on Cognitive Architectures. https://intelligence.org/2013/09/25/paul-rosenbloom-interview/. Accessed Nov. 2018.

  23. See especially Rosenbloom et al. 2016, Sect. 5.1 for a technical introduction to graphical models. In general, these models concern efficient computation with complex multivariate functions. This involves decomposing these functions into products of simpler functions, mapping the resulting decompositions onto graphs, and computing over these graphs via message passing or sampling algorithms. Graphical models provide working approaches to both symbol and signal processing, and to both logical and probabilistic reasoning.
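
As a generic, textbook-style illustration of that decomposition idea (it is not Sigma's actual representation or code), the sketch below writes a two-variable joint as a product of simpler factors and recovers a marginal by summing one variable out along the graph, then checks the result against brute-force enumeration of the joint table.

```python
# Minimal sum-product message passing on a two-variable factor graph:
# the joint over (A, B) is decomposed as f1(A) * f2(A, B), and the marginal
# over B is obtained by passing a summed-out message along the graph
# instead of enumerating the full joint table. Generic illustration only.

f1 = {0: 0.3, 1: 0.7}                      # unary factor on A
f2 = {(0, 0): 0.9, (0, 1): 0.1,            # pairwise factor on (A, B)
      (1, 0): 0.2, (1, 1): 0.8}

# Message from factor f2 to variable B: sum A out, weighted by f1's message.
msg_to_b = {b: sum(f1[a] * f2[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Normalise to get the marginal distribution over B.
z = sum(msg_to_b.values())
marginal_b = {b: v / z for b, v in msg_to_b.items()}

# Sanity check against brute-force enumeration of the joint.
joint = {(a, b): f1[a] * f2[(a, b)] for a in (0, 1) for b in (0, 1)}
zj = sum(joint.values())
brute = {b: sum(v for (a, bb), v in joint.items() if bb == b) / zj for b in (0, 1)}

print(marginal_b)   # {0: 0.41, 1: 0.59}
print(brute)        # matches the message-passing result
```

In larger graphs the same summing-out step becomes a full message-passing schedule, and sampling replaces exact summation when the factors grow too large.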

  24. See Hawkins 2004.

  25. See Hawkins et al. 2017.

  26. The debate over the value of Connectionism as a view that hopes to explain intellectual abilities using artificial neural networks has been going on for several decades now. For a comprehensive overview, see Garson and Buckner 2019.

  27. This study was limited to the examination of pyramidal neurons.

  28. A grid-like cell is a type of neuron in the brains of many species that allows them to understand their position in space.

  29. See, for example, Chalmers 2006.

  30. See, for example, Taylor 2015; Howell 2009; and Kim 1998, and 2006.

  31. In general, it is difficult to define similarity relations precisely, but see Helman 1988, especially the chapters by Stuart Russell (“Analogy by Similarity”) and Ilkka Niiniluoto (“Analogy and Similarity in Scientific Reasoning”).

  32. As proponents of Biological Naturalism claim. See Searle 2007.

  33. See Garnelo and Shanahan (2019: 17).

  34. The Neuro-Symbolic Concept Learner is an example of such a hybrid system. See Mao et al. 2019.

References

  • Anderson M (2011) Reduction considered harmful. https://hplusmagazine.com/2011/03/31/reduction-considered-harmful/. Accessed July 2018

  • Banino A, Barry C, Uria B, Blundell C, Lillicrap T, Mirowski P, Kumaran D (2018) Vector-based navigation using grid-like representations in artificial agents. Nature 557:429–433

  • Boden MA (1998) Creativity and artificial intelligence. Artif Intell 103:347–356

  • Boden MA (2004) Creativity in a nutshell. In: Boden MA (ed) The creative mind: myths and mechanisms. Routledge, London, pp 1–10

  • Boden MA (2014) Creativity and artificial intelligence: a contradiction in terms? In: Paul ES, Kaufman SB (eds) The philosophy of creativity: new essays. Oxford University Press, Oxford

  • Brooks RA (1991) Intelligence without representation. Artif Intell 47:139–159

  • Chalmers D (2006) Strong and weak emergence. In: Clayton P, Davies P (eds) The re-emergence of emergence. Oxford University Press, Oxford, pp 244–255

  • Clark A (2016) Surfing uncertainty. Oxford University Press, Oxford

  • Douven I (2017) Abduction. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/abduction/. Accessed Nov 2018

  • Garnelo M, Shanahan M (2019) Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Curr Opin Behav Sci 29:17–23

  • Garson J, Buckner C (2019) Connectionism. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/connectionism/. Accessed Dec 2019

  • Goertzel B (2006) The hidden pattern. Brown Walker Press, New York

  • Grimm S (2011) Understanding. In: Bernecker S, Pritchard D (eds) The Routledge companion to epistemology. Routledge, New York

  • Hawkins J (2004) On intelligence. Times Books, New York

  • Hawkins J, Ahmad S, Cui Y (2017) A theory of how columns in the neocortex enable learning the structure of the world. Front Neural Circuits 11:81

  • Helman DH (ed) (1988) Analogical reasoning. Springer, Dordrecht

  • Hohwy J (2013) The predictive mind. Oxford University Press, Oxford

  • Howell R (2009) Emergentism and supervenience physicalism. Australas J Philos 87:83–98

  • Kim J (1998) Mind in a physical world. MIT Press, Cambridge

  • Kim J (2006) Essays in metaphysics of mind. Oxford University Press, Oxford

  • Langley P (2012) The cognitive systems paradigm. Adv Cognit Syst 1:3–13

  • Legg S, Hutter M (2006) A formal measure of machine intelligence. In: Proc. 15th annual machine learning conference of Belgium and the Netherlands (Benelearn’06), pp. 73–80

  • Mao J, Gan C, Kohli P, Tenenbaum JB, Wu J (2019) The neuro-symbolic concept learner: interpreting scenes, words, and sentences from natural supervision. ICLR 2019 https://arxiv.org/abs/1904.12584v1. Accessed Dec 2019

  • McIlraith SA (1998) Logic-based abductive inference. https://www.cs.utoronto.ca/kr/papers/abduction.pdf. Accessed Nov 2018

  • Novitz D (1999) Creativity and constraint. Australas J Philos 77(1):67–82

  • Pritchard D (2009) Knowledge, understanding and epistemic value. R Inst Philos Suppl 64:19–43

  • Riggs W (2003) Understanding virtue and the virtue of understanding. In: DePaul M, Zagzebski L (eds) Intellectual virtue: perspectives from ethics and epistemology. Oxford University Press, Oxford

  • Rosenbloom P, Demski A, Ustun V (2016) The sigma cognitive architecture and system: towards functionally elegant grand unification. J Artif Gen Intell 7(1):1–103

  • Sardi S, Vardi R, Sheinin A, Goldental A, Kanter I (2017) New types of experiments reveal that a neuron functions as multiple independent threshold units. Sci Rep 7:18036

  • Searle J (2007) Biological naturalism. In: Velmans M, Schneider S (eds) The Blackwell companion to consciousness. Blackwell, Malden, pp 325–334

  • Taylor E (2015) Collapsing emergence. Philos Q 65:732–753

  • Thórisson KR, Kremelberg D, Steunebrink BR, Nivel E (2016) About understanding. In: International conference on artificial general intelligence, Springer, New York, pp. 106–117

  • Van Fraassen BC (1980) The scientific image. Oxford University Press, Oxford

  • Voss P (2005) Essentials of general intelligence: the direct path to AGI. In: Goertzel B, Pennachin C (eds) Artificial general intelligence. Springer-Verlag, Berlin

  • Wilkenfeld D (2013) Understanding as representation manipulability. Synthese 190(6):997–1016

  • Woodward J (2003) Making things happen: a theory of causal explanation. Oxford University Press, Oxford

Author information

Correspondence to Erez Firt.

Cite this article

Firt, E. The missing G. AI & Soc 35, 995–1007 (2020). https://doi.org/10.1007/s00146-020-00942-y
