Abstract
This chapter analyzes some aspects of the ontology of AI. It begins with definitions of AI, moves on to the main categories of weak and strong AI, then to the machine-learning process, to the puzzling issue of consciousness and, last but not least, to the quest for friendly AI. The aim of the chapter is to highlight those constituents of AI which give us insight into its potential evolution, its autonomy and its unpredictability. In this sense, the chapter does not aim at a complete analysis of the ontology of AI, but at an analysis of those elements which show why legal regulation is necessary.
Notes
- 1.
The Editors of Encyclopaedia Britannica, Methodic doubt, Philosophy. https://www.britannica.com/topic/methodic-doubt. Accessed 14 June 2020.
- 2.
Grossmann (1983).
- 3.
Busse et al. (2015), pp. 29–41.
- 4.
Gruber (1993), pp. 199–220.
- 5.
Uschold and Gruninger (1996).
- 6.
Herre et al. (2006).
- 7.
Pickert. Einführung in Ontologien. http://www.dbis.informatik.hu-berlin.de/dbisold/lehre/WS0203/SemWeb/artikel/2/PickertOntologienfinal.pdf
- 8.
Busse et al. (2015), p. 29.
- 9.
They are also presented in the introduction.
- 10.
Chandrasekaran (1990), p. 14.
- 11.
- 12.
Russell and Norvig (2013).
- 13.
Poole and Mackworth (2010).
- 14.
Rich and Knight (1991).
- 15.
Charniak and McDermott (1985).
- 16.
Noyes. 5 things you need to know about A.I.: Cognitive, neural and deep, oh my! Computerworld (Mar. 3, 2016, 12:49 pm). http://www.computerworld.com/article/3040563/enterprise-applications/5-things-you-need-toknow-about-ai-cognitive-neural-anddeep-oh-my.html; http://perma.cc/7PW9-P42G. Accessed 25 June 2018.
- 17.
Scherer (2016), pp. 363–364.
- 18.
Scherer (2016), p. 360.
- 19.
Omohundro (2008).
- 20.
Russell and Norvig (2010), pp. 2–3.
- 21.
Brownstein et al. (1983), p. 169.
- 22.
Laton (2016), p. 94.
- 23.
Russell and Norvig (2010), p. 3.
- 24.
McCarthy (2007) What is artificial intelligence? Stanford University. http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.
- 25.
McCarthy and Stanford University Formal Reasoning Group (2007) What is artificial intelligence|basic questions. Formal Reasoning Group. http://www-formal.stanford.edu/jmc/whatisai/node1.html.
- 26.
Legg and Hutter (2007) A collection of definitions of intelligence. arXiv:0706.3639 [Cs]. http://arxiv.org/abs/0706.3639.
- 27.
Legg and Hutter. A collection of definitions of intelligence, at p. 9
- 28.
Albus (1991), p. 473.
- 29.
Hern. What is the Turing test? And are we all doomed now? The Guardian. https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test. Accessed 18 June 2020.
- 30.
“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese…
Cole. The Chinese room argument. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Spring 2020 ed. https://plato.stanford.edu/archives/spr2020/entries/chinese-room/.
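The mechanics of Searle's thought experiment in note 30 can be sketched as a trivial lookup program (my own illustration, not Searle's or the chapter's; the symbol pairs are invented placeholders). The "room" returns correct-looking answers by rule-following alone, with no understanding involved:

```python
# A toy "Chinese room": a rule book mapping input symbols to output
# symbols. The question/answer pairs are hypothetical placeholders.
RULE_BOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I am fine"
    "你是谁": "我是学生",  # "Who are you?" -> "I am a student"
}

def room(symbols: str) -> str:
    """The person in the room: match the incoming symbols against the
    instruction book and pass back whatever it prescribes."""
    return RULE_BOOK.get(symbols, "不知道")  # default: "don't know"
```

The point of the argument survives the sketch: `room` can answer every question its rule book covers while "understanding" nothing, which is why passing a behavioral test need not imply comprehension.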
- 31.
Cole. The Chinese room argument.
- 32.
- 33.
Russell and Norvig (2010).
- 34.
Wisskirchen et al. (2010), p. 10.
- 35.
Urban (2015) The AI revolution: the road to superintelligence. Wait but why. https://www.waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html. Accessed 28 June 2018.
- 36.
The applications are ever-expanding in fact: “speech and language recognition of the Siri virtual assistant on the Apple iPhone…interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices…”
Heath (2018) What is AI? Everything you need to know about artificial intelligence. ZDNet. https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence. Accessed 1 February 2019.
- 37.
Bekey (2005).
- 38.
Heath. What is AI? Everything you need to know about artificial intelligence.
- 39.
Goertzel and Pennachin (2007) Artificial general intelligence, p vi.
- 40.
Bostrom (2014), p. 23
- 41.
- 42.
Gigerenzer et al. (1999).
- 43.
Tal (2018) Forecast|How the first artificial general intelligence will change society: future of artificial intelligence P2. Quantumrun special series. https://www.quantumrun.com/prediction/first-artificial-general-intelligence-society-future. Accessed 3 February 2019.
- 44.
What is still missing is the “raw computing power”, but there are several ways in which this gap could close.
Snyder-Beattie and Dewey (2014) Explainer: what is superintelligence? The conversation. https://theconversation.com/explainer-what-is-superintelligence-29175. Accessed 1 February 2019.
- 45.
Ibid.
- 46.
Moravec (1976) The role of raw power in intelligence. www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html. Accessed 14 July 2019.
- 47.
Bostrom (2012), pp. 103–130.
- 48.
Bostrom (2014).
- 49.
Bostrom. How long before superintelligence? Oxford Future of Humanity Institute, Faculty of Philosophy & Oxford Martin School, University of Oxford. https://nickbostrom.com/superintelligence.html. Accessed 02 February 2019.
- 50.
Bostrom (2014), pp. 40–60.
- 51.
Shanahan (2010).
- 52.
Bostrom (2014), pp. 40–60.
- 53.
Good (1965).
- 54.
Hawking et al. Transcending complacency on superintelligent machines. Huffpost. https://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html?ec_carp=3359844804041712164. Accessed 1 February 2019.
- 55.
Vinge (1993) The coming technological singularity: how to survive in the post-human era. In: Vision-21: interdisciplinary science and engineering in the era of cyberspace, vol. 11, pp 12–14. https://perma.cc/6UY3-C2RJ. Accessed 10 February 2019.
- 56.
Guihota et al. (2017), pp. 385–394.
- 57.
Yudkowsky (1996) Staring at the singularity. http://yudkowsky.net/obsolete/singularity.html.
- 58.
Vinge (1993).
Kurzweil (2005).
Yudkowsky (2007) Three major singularity schools. http://yudkowsky.net/singularity/schools.
- 59.
Chalmers (2010), pp. 7–9.
- 60.
Vinge (1993) The coming technological singularity: how to survive in the post-human era. https://edoras.sdsu.edu/vinge/misc/singularity.html. Accessed 1 August 2019.
- 61.
Wallach (2016), p. 297.
- 62.
Yudkowsky. Three major singularity schools. http://yudkowsky.net/singularity/schools/. Accessed 31 July 2019.
- 63.
McCarthy (2008), pp. 2003–2011.
Lake et al (2016) Building machines that learn and think like people. Center for brains, minds, and machines memo No. 046, at 7. http://www.mit.edu/tomeru/papers/machines_that_think.pdf.
- 64.
Turing (1950), pp. 433–460.
- 65.
Faggella. What is machine learning? Emerj. https://emerj.com/ai-glossary-terms/what-is-machine-learning/. Accessed 2 February 2019.
- 66.
Copeland (2016) What’s the difference between artificial intelligence, machine learning, and deep learning? nvidia. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/. Accessed 2 February 2019.
- 67.
Kowert. The foreseeability of human-artificial intelligence interactions, p. 183.
Tanz (2016) Soon we won’t program computers. We’ll train them like dogs. WIRED. https://www.wired.com/2016/05/the-end-of-code/. Accessed 24 June 2018.
Scherer (2016), p. 365.
Cuellar (2017), pp. 27–33.
- 68.
Schuller. At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with International Humanitarian Law, p. 396.
- 69.
Khoury (2017), pp. 635–640
- 70.
Tito (2017), pp. 7–8.
- 71.
UNIDIR. Autonomous weapon systems: implications of increasing autonomy in the critical functions of weapons (ref. 4283-ebook); Dr. Ludovic Righetti (Max Planck Institute for Intelligent Systems, Germany), Emerging technology and future autonomous weapons, p. 37.
- 72.
Davis and Marcus (2015), pp. 92–93.
- 73.
“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter”.
Müller and Bostrom (2016), p. 553.
Machine learning: what it is and why it matters. SAS Institute. http://www.sas.com/en_us/insights/analytics/machine-learning.html. Archived at https://perma.cc/X5VD-4WPW.
- 74.
Karnow (2016), p. 53.
- 75.
Marra and McNeil (2013), pp. 1139–1145.
Schuller. At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with International Humanitarian Law, p. 404.
- 76.
Copeland. What’s the difference between artificial intelligence, machine learning, and deep learning?
- 77.
De Spiegeleire et al. Artificial intelligence and the future of defense, p. 41.
- 78.
Nadkarni et al. (2011), p. 544.
Liddy (2001) Natural language processing, surface (Syracuse Univ. Research Facility and Collaborative Env’t). http://surface.syr.edu/cgi/viewcontent.cgi?article=1043&context=istpub.
- 79.
“An artificial neural network transforms input data by applying a nonlinear function to a weighted sum of the inputs. The transformation is known as a neural layer and the function is referred to as a neural unit. The intermediate outputs of one layer, called features, are used as the input into the next layer. The neural network through repeated transformations learns multiple layers of nonlinear features (like edges and shapes), which it then combines in a final layer to create a prediction (of more complex objects). The neural net learns by varying the weights or parameters of a network so as to minimize the difference between the predictions of the neural network and the desired values. This phase where the artificial neural network learns from the data is called training.”
Nvidia. Artificial Neural Networks. https://developer.nvidia.com/discover/artificial-neural-network. Accessed 02 February 2019.
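The training loop described in note 79 (weighted sums passed through a nonlinear function, layer outputs fed to the next layer, weights varied to shrink the gap between predictions and desired values) can be sketched as a toy two-layer network in pure Python. This is my own minimal illustration, not code from the chapter or from Nvidia; the AND function is an arbitrary learnable target and all names are invented:

```python
import math
import random

random.seed(0)

def sig(z):
    """The nonlinear 'neural unit' applied to a weighted sum."""
    return 1.0 / (1.0 + math.exp(-z))

# Weights: each hidden row and the output row are [w1, w2, bias].
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden layer, 2 units
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output layer, 1 unit

def forward(x):
    # Layer 1 produces intermediate features; layer 2 combines them.
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

# Tiny dataset: logical AND, used here only as an easy target.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

def epoch(lr=0.5):
    """One full-batch gradient step: vary the weights so as to minimize
    the difference between predictions and desired values (training)."""
    g1 = [[0.0] * 3 for _ in range(2)]
    g2 = [0.0] * 3
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                  # output-unit error signal
        for j in range(2):
            g2[j] += dy * h[j]
            dh = dy * W2[j] * h[j] * (1 - h[j])     # error propagated to hidden unit j
            g1[j][0] += dh * x[0]
            g1[j][1] += dh * x[1]
            g1[j][2] += dh
        g2[2] += dy
    for j in range(2):
        for k in range(3):
            W1[j][k] -= lr * g1[j][k]
        W2[j] -= lr * g2[j]
    W2[2] -= lr * g2[2]

before = mse()
for _ in range(5000):
    epoch()
after = mse()   # training reduces the prediction error
```

Real deep networks differ only in scale: more layers, more units per layer, and automatic differentiation in place of the hand-written gradient above.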
- 80.
Artificial Intelligence Blog. DL algorithms: deep belief networks (DBN). https://www.artificial-intelligence.blog/education/dl-algorithms-deep-belief-networks-dbn. Accessed 28 April 2019.
- 81.
Bostrom (2016), pp. 7–8.
- 82.
Bostrom (2016), p. 8.
- 83.
World Economic Forum (2019) White paper, AI governance: a holistic approach to implement ethics into AI, p. 6. https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai. Accessed 18 October 2019.
- 84.
Castelvecchi (2016), p. 22.
UK Government Office for Science (2015) Artificial intelligence: opportunities and implications for the future of decision making, p 5. Available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligenceai-report.pdf.
- 85.
Bostrom (2014), p. 9.
- 86.
Bostrom (2014), p. 9.
- 87.
Bostrom (2014), p. 10.
- 88.
Bostrom (2014), p. 29.
- 89.
inFERENce (2015) The two kinds of uncertainty an AI agent has to represent. https://www.inference.vc/the-two-kinds-of-uncertainties-in-reinforcement-learning-2/. Accessed 06 July 2019.
- 90.
inFERENce (2015) The two kinds of uncertainty an AI agent has to represent.
- 91.
Bostrom (2014), p. 10.
- 92.
Thomason R. Logic and artificial intelligence. Stanford encyclopedia of philosophy. https://perma.cc/3RPH-PVKV. Accessed 29 August 2018.
- 93.
Thomason. Logic and artificial intelligence.
- 94.
Hutter (2010), pp. 125–126.
Hallevy (2018) Dangerous robots – artificial intelligence vs. human intelligence. Available at: https://ssrn.com/abstract=3121905. Accessed 19 July 2018.
- 95.
Yanisky-Ravid and Liu. When artificial intelligence systems produce inventions: the 3A era and an alternative model for patent law, p. 7.
Camett and Heinz (2006) John Koza built an invention machine. Popular Science. www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine. Accessed 16 September 2018.
Hallevy. Dangerous robots – artificial intelligence vs. human intelligence, p. 6.
Suchman and Weber (2016), pp. 39–40.
- 96.
Big data: what it is and why it matters. SAS INSTITUTE. http://www.sas.com/en_us/insights/big-data/what-is-bigdata.html. Accessed 26 September 2016.
Hilbert M and Lopez P (2011) The world’s technological capacity to store, communicate, and compute information: tracking the global capacity of 60 analog and digital technologies during the period from 1986 to 2007. martinhilbert.net. http://www.martinhilbert.net/WorldInfoCapacity.html. Accessed 26 September 2016.
- 97.
Tegmark (2017), pp. 140–141.
- 98.
Ouellette (2018) Move over AlphaGo: AlphaZero taught itself to play three different games. Ars Technica. https://arstechnica.com/science/2018/12/move-over-alphago-alphazero-taught-itself-to-play-three-different-games/. Accessed 3 February 2019.
- 99.
Ouellette. Move over AlphaGo: AlphaZero taught itself to play three different games. Ars Technica.
- 100.
Pyle and San Jose (2015) An executive’s guide to machine learning. McKinsey Quarterly. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning. Accessed 3 February 2019.
- 101.
Dennett (1978) Brainstorms, pp 149–150.
There are, however, contradictory approaches as well. According to Henry Greely, (1) the “…mind is wholly created by or through the state of (the physical) brain” and (2) “the state of the physical brain at time T1 is totally a function of its state at time T0 plus whatever inputs it has received,” therefore (3) “the mind is completely determined.”
Greely (2018), pp. 2303–2315.
- 102.
Minsky (1985).
- 103.
Schneider (2019), p. 7.
- 104.
Kurzweil (1999), pp. 51–62.
- 105.
- 106.
Johnson-Laird (1983), pp. 448–477.
- 107.
McGinn (1991), pp. 202–213.
- 108.
Chalmers (1996), pp. 293–297.
Chalmers. What is it like to be a thermostat? http://consc.net/notes/lloyd-comments.html.
- 109.
Chalmers (1996), pp. 293–297.
- 110.
Frye (2018), pp. 42–44.
- 111.
Chalmers (1995), pp. 200–219.
- 112.
Kurzweil defines it as the ability to use optimally limited resources in furtherance of goals.
Kurzweil. The age of spiritual machines, at p… (What is Artificial Intelligence).
- 113.
Schank (1987) What is AI, anyway? AIMAG Winter, pp. 59–60.
- 114.
Schkolne (2018) Machines demonstrate self-awareness, Medium. https://becominghuman.ai/machines-demonstrate-self-awareness-8bd08ceb1694. Accessed 11 December 2018.
- 115.
Chong (2015) This robot passed a ‘self-awareness’ test that only humans could handle until now. Tech Insider. www.businessinsider.com/this-robot-passed-a-selfawareness-test-that-only-humans-could-handle-until-now-2015-7. Accessed 23 August 2018.
- 116.
Herbert (1985), p. 249.
- 117.
Smith (1998), pp. 277–281.
- 118.
Yanisky-Ravid and Liu. When artificial intelligence systems produce inventions: the 3A era and an alternative model for patent law. https://www.papers.ssrn.com/sol3/papers.cfm?abstract_id=2931828. Accessed 10 July 2018.
Hunter (1990), pp. 12–15.
Copeland (2000) What is artificial intelligence? Alanturing.net. www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html. Accessed 29 June 2018.
Armstrong and Sotala (2012), p. 52.
Galeon and Reedy (2017) Kurzweil claims that the singularity will happen by 2045. Futurism. https://www.futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/. Accessed 29 June 2018.
Davenport and Kirby (2015) Beyond automation. Harvard Business Review. https://hbr.org/2015/06/beyond-automation. Accessed 11 December 2018.
- 119.
Koch (2018) What is consciousness? Scientific American. www.scientificamerican.com/article/what-is-consciousness/. Accessed 22 February 2019.
Tegmark (2017), pp. 428–430.
- 120.
Mason (2015), pp. 1–3.
- 121.
Tegmark (2017), p. 431.
- 122.
Damasio (1994), pp. 247–248.
Oxford Living Dictionaries. https://en.oxforddictionaries.com/definition/sentient.
Sternberg (2003) Wisdom, intelligence, and creativity synthesized.
- 123.
Schneider (2019), pp. 36–37.
It is also framed as the distinction between phenomenal consciousness, the type of consciousness that we humans have, providing us with an internal, self-reflective perception of our own perception, and functional or cognitive consciousness, which lacks the depth of the former: the “AI zombie” type of conscious existence.
Ibid, pp. 48–49
- 124.
Ben-Ari et al. (2017), pp. 4–17.
Eden et al. (2012), pp. 28–29.
Marie Del Prado (2015) Stephen Hawking warns of an ‘intelligence explosion’. Business Insider. www.businessinsider.com/stephen-hawking-prediction-reddit-ama-intelligent-machines-2015-10. Accessed 29 August 2018.
Ahmed and Glasgow (2012) Swarm intelligence: concepts, models and applications: technical report 2012-585. Queen’s Univ. School of Computing 2. https://ftp.qucis.queensu.ca/TechReports/Reports/2012-585.pdf. Accessed 29 August 2018.
- 125.
Senior (2015) Narrow AI: automating the future of information retrieval. Techcrunch. https://techcrunch.com/2015/01/31/narrow-ai-cant-do-that-or-can-it/. Accessed 29 August 2018.
- 126.
Bostrom (2014), pp. 26, 29, 140, 155.
Hauser. Chinese room argument, internet encyclopedia of philosophy. www.iep.utm.edu/chineser/. Accessed 9 September 2018.
- 127.
Hughes (2004).
Hughes (2013).
Olson (1997).
Olson (2017) Personal identity. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/identity-personal/. Accessed 21 June 2020.
- 128.
Some relevance exists for ANI too. Still, for ANI applications the issue is of lower significance, as human intervention remains possible.
- 129.
Yudkowsky (2001), p. 13.
- 130.
Yudkowsky (2001), p. 3.
- 131.
Tarleton (2010), p. 1.
- 132.
Omohundro (2008), pp. 483–492.
- 133.
Tegmark (2017) Friendly AI: aligning goals, future of life institute. https://futureoflife.org/2017/08/29/friendly-ai-aligning-goals/. Accessed 24 August 2019.
- 134.
Tegmark (2017).
- 135.
Soares and Fallenstein (2017), p. 10.
- 136.
Bostrom (2014), pp. 100–108.
- 137.
Bostrom (2014), pp. 109–110.
- 138.
Bostrom (2014), pp. 114–119.
- 139.
Bostrom (2014), pp. 114–119.
- 140.
Soares and Fallenstein (2017), p. 10.
- 141.
Hibbard (2001), pp. 13–15.
- 142.
Bostrom (2006), pp. 48–54.
Omohundro (2007) The nature of self-improving artificial intelligence. Paper presented at singularity summit 2007, San Francisco, CA, September 8–9. http://intelligence.org/summit2007/overview/abstracts/#omohundro.
- 143.
Yudkowsky (2008), p. 16.
- 144.
Yudkowsky (2008), p. 42.
- 145.
Donadson (2019) Five principles for citizen-friendly artificial intelligence. The Mandarin. https://www.themandarin.com.au/109017-five-principles-for-citizen-friendly-artificial-intelligence/
- 146.
European Commission. Requirements of trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/. Accessed 28 September 2019.
- 147.
They can also contribute to the debate about the legitimacy of the development of AGI and ASI, which is conducted below.
- 148.
Muehlhauser and Bostrom (2014), pp. 41–44.
- 149.
Yudkowsky (2001), p. 5.
References
Albus JS (1991) Outline for a theory of intelligence. IEEE Trans Syst Man Cybern 21:473
Armstrong S, Sotala K (2012) How we’re predicting AI--or failing to. In: Romportl J (ed) Beyond AI: artificial dreams, mach. intelligence research inst. Czech Republic, Pilsen, p 52
Bekey GA (2005) Autonomous robots: from biological inspiration to implementation and control. MIT Press, Cambridge
Ben-Ari D, Frish Y, Lazovski A, Eldan U, Greenbaum D (2017) “Danger, will Robinson”? Artificial intelligence in the practice of law: an analysis and proof of concept experiment. Richmond J Law Technol 23:4–17
Bostrom N (2006) What is a singleton? Linguist Philos Invest 5(2):48–54
Bostrom N (2012) How hard is artificial intelligence? Evolutionary arguments and selection effects. J Consciousness Stud 19(7-8):103–130
Bostrom N (2014) Superintelligence paths, dangers, strategies. Oxford University Press, Oxford
Bostrom N (2016) Superintelligence, paths, dangers, strategies. Oxford University Press, Oxford, pp 7–8
Brownstein BJ et al (1983) Technological assessment of future battlefield robotic applications. In: Proceedings of the army conference on application of AI to battlefield information management. US Navy Surface Weapons Center, White Oak, p 169
Busse J, Humm B et al (2015) Actually, what does “ontology” mean? A term coined by philosophy in the light of different scientific disciplines. J Comput Inf Technol 23:29–41
Castelvecchi D (2016) Can we open the black box of AI. Nature 538:22
Chalmers DJ (1995) Facing up to the problem of consciousness. J Consciousness Stud 2(3):200–219
Chalmers D (1996) The conscious mind. In: Search of a final theory. Oxford University Press, Oxford
Chalmers D (2008) The hard problem of consciousness. In: Velmans M, Schneider S (eds) The Blackwell companion to consciousness. Wiley-Blackwell, Hoboken
Chalmers D (2010) The singularity, a philosophical analysis. J Consciousness Stud 17(9-10):7–9
Chandrasekaran B (1990) What kind of information processing is intelligence? In: Partridge D, Wilks Y (eds) The foundations of artificial intelligence. Springer, New York, p 14
Charniak E, McDermott D (1985) Introduction to artificial intelligence. Addison-Wesley, Boston
Cuellar MF (2017) A simpler world? On pruning risks and harvesting fruits in an orchard of whispering algorithms. UC Davis Law Rev 51:27–33
Damasio AR (1994) Descartes’ error: emotion, reason, and the human brain. Avon Books, New York, pp 247–248
Davis E, Marcus G (2015) Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun ACM 58:92–93
Eden A, Steinhart E, Pearce D, Moor J (2012) Chapter I; singularity hypotheses: an overview, introduction: singularity hypotheses: a scientific and philosophical assessment. In: Eden A, Moor J, Soraker J, Steinhart E (eds) Singularity hypotheses, a scientific and philosophical assessment. Springer, New York, pp 28–29
Frye BL (2018) The lion, the bat & the thermostat: metaphors on consciousness. Savannah Law Rev 5:42–44
Gigerenzer G, Todd PM, ABC Research Group (1999) Simple heuristics that make us smart. Oxford University Press, Oxford
Ginsberg ML (1988) Multivalued logics: a uniform approach to reasoning in artificial intelligence. Comput Intell 4:265
Good IJ (1965) Speculations concerning the first ultraintelligent machine. In: Alt F, Rubinoff M (eds) Advances in computers, vol 6. Academic, New York
Greely HT (2018) Neuroscience, artificial intelligence, crispr--and dogs and cats. U C Davis Law Rev 51:2303–2315
Grossmann R (1983) The categorial structure of the world. In: Fundamental issues of artificial intelligence. Springer, New York
Gruber TR (1993) A translation approach to portable ontologies. Knowl Acquis 5(2):199–220
Guihota M, Matthew AF, Suzor NP (2017) Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt J Entertainment Technol Law 20:385–394
Herbert N (1985) Quantum reality: beyond the new physics. Anchor Books Editions, New York, p 249
Herre H, et al (2006) General Formal Ontology (GFO): a foundational ontology integrating objects and processes. Technical Report, 8, University of Leipzig
Hibbard B (2001) Super-intelligent machines. ACM SIGGRAPH Comput Graphics 35(1):13–15
Hughes J (2004) Citizen cyborg: why democratic societies must respond to the redesigned human of the future. Westview Press, Cambridge
Hughes J (2013) Transhumanism and personal identity. In: More M, More N (eds) The transhumanist reader. Wiley, Boston
Hunter L (1990) Molecular biology for computer scientists. In: Hunter L (ed) Artificial intelligence and molecular biology. MIT Press, Cambridge, pp 12–15
Hutter M (2010) Universal artificial intelligence: sequential decisions based on algorithmic probability. Springer, New York, pp 125–126
Johnson-Laird PN (1983) Mental models, pp. 448–477
Karnow CEA (2016) The application of traditional tort theory to embodied machine intelligence. In: Calo, Froomkin, Kerr (eds) Robot law. Edward Elgar, Cheltenham, p 53
Khoury AH (2017) Intellectual property rights for “hubots”: on the legal implications of human-like robots as innovators and creators. Cardozo Arts Entertainment Law J 35:635–640
Kurzweil R (1999) The age of spiritual machines, when computers exceed human intelligence. Penguin Books, London, pp 51–62
Kurzweil R (2005) The singularity is near. Viking, New York
Laton D (2016) Manhattan_Project.Exe: a nuclear option for the digital age. Cath Univ J Law Technol 25:94
Marra WC, McNeil SK (2013) Understanding “the loop”: regulating the next generation of war machines. Harv J Law 36:1139–1145
Mason C (2015) Engineering kindness: first steps to building a machine with compassionate intelligence. Int J Synth Emot 6(1):1–3
McCarthy J (2008) The well-designed child. Artif Intell 172:2003–2011
McGinn C (1991) The problem of consciousness: essays towards a resolution, pp. 202–213
Minsky M (1985) The society of mind. Simon and Schuster, New York
Muehlhauser L, Bostrom N (2014) Why we need friendly Ai. Think 13:41–44
Müller VC, Bostrom N (2016) Future progress in artificial intelligence: a survey of expert opinion. Fundamental issues of artificial intelligence, p 553
Nadkarni PM, Ohno-Machado L, Chapman WW (2011) Natural language processing: an introduction. J Am Med Informatics 18:544
Nute D (2011) A logical hole the Chinese room avoids. Mind Mach 21:431–433
Olson E (1997) The human animal: personal identity without psychology. Oxford University Press, Oxford
Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial general intelligence 2008: 1st AGI conference. Frontiers in artificial intelligence and applications, vol 171. IOS, Amsterdam, pp 483–492
Poole DL, Mackworth AK (2010) Artificial intelligence: foundations of computational agents. Cambridge University Press, Cambridge
Rich E, Knight K (1991) Artificial intelligence. McGraw-Hill, New York
Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson Education Limited, London, pp 2–3
Russell S, Norvig P (2013) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, Hoboken
Schank RC (1987) What is AI, anyway? AIMAG Winter, pp. 59–60
Scherer MU (2016) Regulating artificial intelligent systems: risks, challenges, competences, and strategies. Harv J Law Technol 29:363–364
Schneider S (2019) Artificial you: AI and the future of your mind. Princeton University Press, Princeton, p 7
Shaffer M (2009) A logical hole in the Chinese room. Mind Mach 19(2):229–235
Shanahan M (2010) Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, New York
Smith JC (1998) Machine intelligence and legal reasoning. Chicago-Kent Law Rev 73:277–281
Soares N, Fallenstein B (2017) Agent foundations for aligning machine intelligence with human interests: a technical research agenda. The technological singularity: managing the journey. Springer, New York, p 10
Suchman L, Weber J (2016) Human-machine autonomies. In: Bhuta N, Beck S, Geib R, Yan Liu H, Kreb C (eds) Autonomous weapon systems: law, ethics, policy. Cambridge University Press, Cambridge, pp 39–40
Tarleton N (2010) Coherent extrapolated volition: a meta-level approach to machine ethics. Miri Machine Intelligence Research Institute, Berkeley, p 1
Tegmark M (2017) Life 3.0, being human in the age of artificial intelligence. Random House, New York, pp 140–141
Tito J (2017) Destination unknown: exploring the impact of artificial intelligence on government. Centre for Public Impact, pp. 7–8
Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460
Uschold M, Gruninger M (1996) Ontologies: principles, methods and applications. Knowl Eng Rev 11:2
Vinge V (1993) The coming technological singularity: how to survive in the post-human era. Whole Earth Review. Winter
Wallach W (2016) The singularity: will we survive our technology? (Doug Wolens, director, I-Maginemedia). Jurimetrics J 56:297
Weiss T (1990) Closing the Chinese room. Ratio 3:165–181
Winograd T (2006) Thinking machines: can there be? Are we? In: Partridge D, Wilks Y (eds) The foundations of artificial intelligence. Cambridge University Press, Cambridge, p 167
Wisskirchen et al (2010) Artificial intelligence and robotics and their impact on the workplace, p 10
Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. Machine Intelligence Research Institute, Berkeley, p 16
Yudkowsky E (2001) Creating friendly AI 1.0: the analysis and design of benevolent goal architectures. Machine Intelligence Research Institute, p 3
Copyright information
© 2021 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Tzimas, T. (2021). The Ontology of AI. In: Legal and Ethical Challenges of Artificial Intelligence from an International Law Perspective. Law, Governance and Technology Series, vol 46. Springer, Cham. https://doi.org/10.1007/978-3-030-78585-7_3
DOI: https://doi.org/10.1007/978-3-030-78585-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78584-0
Online ISBN: 978-3-030-78585-7
eBook Packages: Law and Criminology (R0)