
Part of the book series: Law, Governance and Technology Series (volume 46)


Abstract

This chapter analyzes some aspects of the ontology of AI. It starts with definitions of AI, moves on to the main categories of weak and strong AI, then to machine learning, to the puzzling issue of consciousness and, last but not least, to the quest for friendly AI. The aim of the chapter is to highlight those constituents of AI that offer insight into its potential evolution, its autonomy and its unpredictability. In this sense, the chapter does not aim at a complete analysis of the ontology of AI, but at an analysis of those elements which demonstrate why legal regulation is necessary.


Notes

  1. The Editors of Encyclopaedia Britannica, Methodic doubt, Philosophy. https://www.britannica.com/topic/methodic-doubt. Accessed 14 June 2020.

  2. Grossmann (1983).

  3. Busse et al. (2015), pp. 29–41.

  4. Gruber (1993), pp. 199–220.

  5. Uschold and Gruninger (1996).

  6. Herre et al. (2006).

  7. Pickert. Einführung in Ontologien. http://www.dbis.informatik.hu-berlin.de/dbisold/lehre/WS0203/SemWeb/artikel/2/PickertOntologienfinal.pdf

  8. Busse et al. (2015), p. 29.

  9. They are also presented in the introduction.

  10. Chandrasekaran (1990), p. 14.

  11. Russell and Norvig (2013), p. 2.

      Winograd (2006), p. 167.

  12. Russell and Norvig (2013).

  13. Poole and Mackworth (2010).

  14. Rich and Knight (1991).

  15. Charniak and McDermott (1985).

  16. Noyes. 5 things you need to know about A.I.: Cognitive, neural and deep, oh my! Computerworld (Mar. 3, 2016, 12:49 pm). http://www.computerworld.com/article/3040563/enterprise-applications/5-things-you-need-toknow-about-ai-cognitive-neural-anddeep-oh-my.html; http://perma.cc/7PW9-P42G. Accessed 25 June 2018.

  17. Scherer (2016), pp. 363–364.

  18. Scherer (2016), p. 360.

  19. Omohundro (2008).

  20. Russell and Norvig (2010), pp. 2–3.

  21. Brownstein et al. (1983), p. 169.

  22. Laton (2016), p. 94.

  23. Russell and Norvig (2010), p. 3.

  24. McCarthy (2007) What is artificial intelligence? Stanford University. http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.

  25. McCarthy and Stanford University Formal Reasoning Group (2007) What is artificial intelligence | basic questions. Formal Reasoning Group. http://www-formal.stanford.edu/jmc/whatisai/node1.html.

  26. Legg and Hutter (2007) A collection of definitions of intelligence. arXiv:0706.3639 [cs]. http://arxiv.org/abs/0706.3639.

  27. Legg and Hutter, A collection of definitions of intelligence, p. 9.

  28. Albus (1991), p. 473.

  29. Hern. What is the Turing test? And are we all doomed now? The Guardian. https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test. Accessed 18 June 2020.

  30. "Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese…"

      Cole. The Chinese room argument. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Spring 2020 ed. https://plato.stanford.edu/archives/spr2020/entries/chinese-room/.

  31. Cole. The Chinese room argument.

  32. Cole. The Chinese room argument.

      Also see: Shaffer (2009), pp. 229–235; Nute (2011), pp. 431–433; Weiss (1990), pp. 165–181.

  33. Russell and Norvig (2010).

  34. Wisskirchen et al. (2010), p. 10.

  35. Urban (2015) The AI revolution: the road to superintelligence. Wait But Why. https://www.waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html. Accessed 28 June 2018.

  36. The applications are in fact ever-expanding: "speech and language recognition of the Siri virtual assistant on the Apple iPhone…interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices…"

      Heath (2018) What is AI? Everything you need to know about artificial intelligence. ZDNet. https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence. Accessed 1 February 2019.

  37. Bekey (2005).

  38. Heath. What is AI? Everything you need to know about artificial intelligence.

  39. Goertzel and Pennachin (2007) Artificial general intelligence, p. vi.

  40. Bostrom (2014), p. 23.

  41. Khoury (2017), pp. 635–640.

      Ginsberg (1988), p. 265.

  42. Gigerenzer et al. (1999).

  43. Tal (2018) Forecast | How the first artificial general intelligence will change society: future of artificial intelligence P2. Quantumrun special series. https://www.quantumrun.com/prediction/first-artificial-general-intelligence-society-future. Accessed 3 February 2019.

  44. What is still missing is the "raw computing power", but there are several other ways in which this gap could close.

      Snyder-Beattie and Dewey (2014) Explainer: what is superintelligence? The Conversation. https://theconversation.com/explainer-what-is-superintelligence-29175. Accessed 1 February 2019.

  45. Ibid.

  46. Moravec (1976) The role of raw power in intelligence. www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html. Accessed 14 July 2019.

  47. Bostrom (2012), pp. 103–130.

  48. Bostrom (2014).

  49. Bostrom. How long before superintelligence? Oxford Future of Humanity Institute, Faculty of Philosophy & Oxford Martin School, University of Oxford. https://nickbostrom.com/superintelligence.html. Accessed 2 February 2019.

  50. Bostrom (2014), pp. 40–60.

  51. Shanahan (2010).

  52. Bostrom (2014), pp. 40–60.

  53. Good (1965).

  54. Hawking et al. Transcending complacency on superintelligent machines. HuffPost. https://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html?ec_carp=3359844804041712164. Accessed 1 February 2019.

  55. Vinge (1993) The coming technological singularity: how to survive in the post-human era. In: Vision-21: interdisciplinary science and engineering in the era of cyberspace, vol. 11, pp. 12–14. https://perma.cc/6UY3-C2RJ. Accessed 10 February 2019.

  56. Guihot et al. (2017), pp. 385–394.

  57. Yudkowsky (1996) Staring at the singularity. http://yudkowsky.net/obsolete/singularity.html.

  58. Vinge (1993).

      Kurzweil (2005).

      Yudkowsky (2007) Three major singularity schools. http://yudkowsky.net/singularity/schools.

  59. Chalmers (2010), pp. 7–9.

  60. Vinge (1993) The coming technological singularity: how to survive in the post-human era. https://edoras.sdsu.edu/vinge/misc/singularity.html. Accessed 1 August 2019.

  61. Wallach (2016), p. 297.

  62. Yudkowsky. Three major singularity schools. http://yudkowsky.net/singularity/schools/. Accessed 31 July 2019.

  63. McCarthy (2008), pp. 2003–2011.

      Lake et al. (2016) Building machines that learn and think like people. Center for Brains, Minds, and Machines memo no. 046, p. 7. http://www.mit.edu/tomeru/papers/machines_that_think.pdf.

  64. Turing (1950), pp. 433–460.

  65. Faggella. What is machine learning? Emerj. https://emerj.com/ai-glossary-terms/what-is-machine-learning/. Accessed 2 February 2019.

  66. Copeland (2016) What's the difference between artificial intelligence, machine learning, and deep learning? NVIDIA. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/. Accessed 2 February 2019.

  67. Kowert. The foreseeability of human-artificial intelligence interactions, p. 183.

      Tanz (2016) Soon we won't program computers. We'll train them like dogs. WIRED. https://www.wired.com/2016/05/the-end-of-code/. Accessed 24 June 2018.

      Scherer (2016), p. 365.

      Cuellar (2017), pp. 27–33.

  68. Schuller. At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with International Humanitarian Law, p. 396.

  69. Khoury (2017), pp. 635–640.

  70. Tito (2017), pp. 7–8.

  71. UNIDIR. Autonomous weapon systems: implications of increasing autonomy in the critical functions of weapons (ref. 4283-ebook); Dr. Ludovic Righetti, Max Planck Institute for Intelligent Systems, Germany, Emerging technology and future autonomous weapons, p. 37.

  72. Davis and Marcus (2015), pp. 92–93.

  73. "The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter."

      Müller and Bostrom (2016), p. 553.

      Machine learning: what it is and why it matters. SAS Institute. http://www.sas.com/en_us/insights/analytics/machine-learning.html. Archived at https://perma.cc/X5VD-4WPW.

  74. Karnow (2016), p. 53.

  75. Marra and McNeil (2013), pp. 1139–1145.

      Schuller. At the crossroads of control: the intersection of artificial intelligence in autonomous weapon systems with International Humanitarian Law, p. 404.

  76. Copeland. What's the difference between artificial intelligence, machine learning, and deep learning?

  77. De Spiegeleire et al. Artificial intelligence and the future of defense, p. 41.

  78. Nadkarni et al. (2011), p. 544.

      Liddy (2001) Natural language processing. SURFACE (Syracuse University Research Facility and Collaborative Environment). http://surface.syr.edu/cgi/viewcontent.cgi?article=1043&context=istpub.

  79. "An artificial neural network transforms input data by applying a nonlinear function to a weighted sum of the inputs. The transformation is known as a neural layer and the function is referred to as a neural unit. The intermediate outputs of one layer, called features, are used as the input into the next layer. The neural network through repeated transformations learns multiple layers of nonlinear features (like edges and shapes), which it then combines in a final layer to create a prediction (of more complex objects). The neural net learns by varying the weights or parameters of a network so as to minimize the difference between the predictions of the neural network and the desired values. This phase where the artificial neural network learns from the data is called training."

      NVIDIA. Artificial neural networks. https://developer.nvidia.com/discover/artificial-neural-network. Accessed 2 February 2019.
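The layer-by-layer transformation and training procedure quoted in the note above can be sketched in a few lines of numpy. This is a minimal illustration only, not code from the source: the data, layer sizes, learning rate and all variable names are assumptions chosen for clarity.

```python
import numpy as np

# Minimal two-layer neural network, mirroring the quoted description:
# each layer applies a nonlinear function (here tanh) to a weighted sum
# of its inputs, and "training" adjusts the weights to shrink the gap
# between predictions and desired values. Everything here is illustrative.

rng = np.random.default_rng(0)

# Toy data: learn y = sin(pi * x) on a handful of points.
X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = np.sin(np.pi * X)

# Parameters ("weights") of the two layers.
W1 = rng.normal(0, 0.5, (1, 8))
W2 = rng.normal(0, 0.5, (8, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)   # hidden layer: nonlinear features of the input
    return h, h @ W2      # output layer: combine features into a prediction

def loss(pred, y):
    # Mean squared difference between predictions and desired values.
    return float(np.mean((pred - y) ** 2))

lr = 0.1
h, pred = forward(X, W1, W2)
loss_before = loss(pred, y)

# One gradient-descent step (backpropagation written out by hand).
err = 2 * (pred - y) / len(X)     # dLoss/dPrediction
gW2 = h.T @ err                   # gradient w.r.t. output weights
dh = err @ W2.T * (1 - h ** 2)    # backpropagate through tanh
gW1 = X.T @ dh                    # gradient w.r.t. hidden weights
W1 -= lr * gW1
W2 -= lr * gW2

_, pred = forward(X, W1, W2)
loss_after = loss(pred, y)
print(loss_before, loss_after)
```

A single small step along the negative gradient reduces the training loss; repeating the step many times is the "training" phase the quote refers to.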

  80. Artificial Intelligence Blog. DL algorithms: deep belief networks (DBN). https://www.artificial-intelligence.blog/education/dl-algorithms-deep-belief-networks-dbn. Accessed 28 April 2019.

  81. Bostrom (2016), pp. 7–8.

  82. Bostrom (2016), p. 8.

  83. World Economic Forum (2019) White paper, AI governance: a holistic approach to implement ethics into AI, p. 6. https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai. Accessed 18 October 2019.

  84. Castelvecchi (2016), p. 22.

      UK Government Office for Science (2015) Artificial intelligence: opportunities and implications for the future of decision making, p. 5. Available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligenceai-report.pdf.

  85. Bostrom (2014), p. 9.

  86. Bostrom (2014), p. 9.

  87. Bostrom (2014), p. 10.

  88. Bostrom (2014), p. 29.

  89. inFERENce (2015) The two kinds of uncertainty an AI agent has to represent. https://www.inference.vc/the-two-kinds-of-uncertainties-in-reinforcement-learning-2/. Accessed 6 July 2019.

  90. inFERENce (2015) The two kinds of uncertainty an AI agent has to represent.

  91. Bostrom (2014), p. 10.

  92. Thomason R. Logic and artificial intelligence. Stanford encyclopedia of philosophy. https://perma.cc/3RPH-PVKV. Accessed 29 August 2018.

  93. Thomason. Logic and artificial intelligence.

  94. Hutter (2010), pp. 125–126.

      Hallevy (2018) Dangerous robots – artificial intelligence vs. human intelligence. Available at: https://ssrn.com/abstract=3121905. Accessed 19 July 2018.

  95. Yanisky-Ravid and Liu. When artificial intelligence systems produce inventions: the 3A era and an alternative model for patent law, p. 7.

      Camett and Heinz (2006) John Koza built an invention machine. Popular Science. www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine. Accessed 16 September 2018.

      Hallevy. Dangerous robots – artificial intelligence vs. human intelligence, p. 6.

      Suchman and Weber (2016), pp. 39–40.

  96. Big data: what it is and why it matters. SAS Institute. http://www.sas.com/en_us/insights/big-data/what-is-bigdata.html. Accessed 26 September 2016.

      Hilbert M and Lopez P (2011) The world's technological capacity to store, communicate, and compute information: tracking the global capacity of 60 analog and digital technologies during the period from 1986 to 2007. martinhilbert.net. http://www.martinhilbert.net/WorldInfoCapacity.html. Accessed 26 September 2016.

  97. Tegmark (2017), pp. 140–141.

  98. Ouellette (2018) Move over AlphaGo: AlphaZero taught itself to play three different games. Ars Technica. https://arstechnica.com/science/2018/12/move-over-alphago-alphazero-taught-itself-to-play-three-different-games/. Accessed 3 February 2019.

  99. Ouellette. Move over AlphaGo: AlphaZero taught itself to play three different games.

  100. Pyle and San Jose (2015) An executive's guide to machine learning. McKinsey Quarterly. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning. Accessed 3 February 2019.

  101. Dennett (1978) Brainstorms, pp. 149–150.

      There are, however, contradictory approaches as well. According to Henry Greely, (1) the "…mind is wholly created by or through the state of (the physical) brain", (2) "the state of the physical brain at time T1 is totally a function of its state at time T0 plus whatever inputs it has received," and therefore (3) "the mind is completely determined."

      Greely (2018), pp. 2303–2315.

  102. Minsky (1985).

  103. Schneider (2019), p. 7.

  104. Kurzweil (1999), pp. 51–62.

  105. Chalmers has put forward this "hard" and potentially unanswerable problem.

      See: Chalmers (1996); Chalmers (2008).

  106. Johnson-Laird (1983), pp. 448–477.

  107. McGinn (1991), pp. 202–213.

  108. Chalmers (1996), pp. 293–297.

      Chalmers. What is it like to be a thermostat? http://consc.net/notes/lloyd-comments.html.

  109. Chalmers (1996), pp. 293–297.

  110. Frye (2018), pp. 42–44.

  111. Chalmers (1995), pp. 200–219.

  112. Kurzweil defines it as the ability to use optimally limited resources in furtherance of goals.

      Kurzweil. The age of spiritual machines, at p… (What is Artificial Intelligence).

  113. Schank (1987) What is AI, anyway? AI Magazine, Winter, pp. 59–60.

  114. Schkolne (2018) Machines demonstrate self-awareness. Medium. https://becominghuman.ai/machines-demonstrate-self-awareness-8bd08ceb1694. Accessed 11 December 2018.

  115. Chong (2015) This robot passed a 'self-awareness' test that only humans could handle until now. Tech Insider. www.businessinsider.com/this-robot-passed-a-selfawareness-test-that-only-humans-could-handle-until-now-2015-7. Accessed 23 August 2018.

  116. Herbert (1985), p. 249.

  117. Smith (1998), pp. 277–281.

  118. Yanisky-Ravid and Liu. When artificial intelligence systems produce inventions: the 3A era and an alternative model for patent law. https://www.papers.ssrn.com/sol3/papers.cfm?abstract_id=2931828. Accessed 10 July 2018.

      Hunter (1990), pp. 12–15.

      Copeland (2000) What is artificial intelligence? AlanTuring.net. www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html. Accessed 29 June 2018.

      Armstrong and Sotala (2012), p. 52.

      Galeon and Reedy (2017) Kurzweil claims that the singularity will happen by 2045. Futurism. https://www.futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/. Accessed 29 June 2018.

      Davenport and Kirby (2015) Beyond automation. Harvard Business Review. https://hbr.org/2015/06/beyond-automation. Accessed 11 December 2018.

  119. Koch (2018) What is consciousness? Scientific American. www.scientificamerican.com/article/what-is-consciousness/. Accessed 22 February 2019.

      Tegmark (2017), pp. 428–430.

  120. Mason (2015), pp. 1–3.

  121. Tegmark (2017), p. 431.

  122. Damasio (1994), pp. 247–248.

      Oxford Living Dictionaries. https://en.oxforddictionaries.com/definition/sentient.

      Sternberg (2003) Wisdom, intelligence, and creativity synthesized.

  123. Schneider (2019), pp. 36–37.

      It is also framed as the distinction between phenomenal consciousness, the type of consciousness that we humans have, which provides us with an internal, self-reflective perception of our perception, and functional or cognitive consciousness, which lacks the depth of the former: the "AI zombie" type of conscious existence.

      Ibid., pp. 48–49.

  124. Ben-Ari et al. (2017), pp. 4–17.

      Eden et al. (2012), pp. 28–29.

      Marie Del Prado (2015) Stephen Hawking warns of an 'intelligence explosion'. Business Insider. www.businessinsider.com/stephen-hawking-prediction-reddit-ama-intelligent-machines-2015-10. Accessed 29 August 2018.

      Ahmed and Glasgow (2012) Swarm intelligence: concepts, models and applications. Technical report 2012-585, Queen's University School of Computing. https://ftp.qucis.queensu.ca/TechReports/Reports/2012-585.pdf. Accessed 29 August 2018.

  125. Senior (2015) Narrow AI: automating the future of information retrieval. TechCrunch. https://techcrunch.com/2015/01/31/narrow-ai-cant-do-that-or-can-it/. Accessed 29 August 2018.

  126. Bostrom (2014), pp. 26, 29, 140, 155.

      Hauser. Chinese room argument. Internet Encyclopedia of Philosophy. www.iep.utm.edu/chineser/. Accessed 9 September 2018.

  127. Hughes (2004).

      Hughes (2013).

      Olson (1997).

      Olson (2017) Personal identity. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/identity-personal/. Accessed 21 June 2020.

  128. Some relevance exists with ANI too. Still, regarding ANI applications the issue is of lower significance, as human intervention remains possible.

  129. Yudkowsky (2001), p. 13.

  130. Yudkowsky (2001), p. 3.

  131. Tarleton (2010), p. 1.

  132. Omohundro (2008), pp. 483–492.

  133. Tegmark (2017) Friendly AI: aligning goals. Future of Life Institute. https://futureoflife.org/2017/08/29/friendly-ai-aligning-goals/. Accessed 24 August 2019.

  134. Tegmark (2017).

  135. Soares and Fallenstein (2017), p. 10.

  136. Bostrom (2014), pp. 100–108.

  137. Bostrom (2014), pp. 109–110.

  138. Bostrom (2014), pp. 114–119.

  139. Bostrom (2014), pp. 114–119.

  140. Soares and Fallenstein (2017), p. 10.

  141. Hibbard (2001), pp. 13–15.

  142. Bostrom (2006), pp. 48–54.

      Omohundro (2007) The nature of self-improving artificial intelligence. Paper presented at Singularity Summit 2007, San Francisco, CA, September 8–9. http://intelligence.org/summit2007/overview/abstracts/#omohundro.

  143. Yudkowsky (2008), p. 16.

  144. Yudkowsky (2008), p. 42.

  145. Donadson (2019) Five principles for citizen-friendly artificial intelligence. The Mandarin. https://www.themandarin.com.au/109017-five-principles-for-citizen-friendly-artificial-intelligence/

  146. European Commission. Requirements of trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/. Accessed 28 September 2019.

  147. They can also contribute to the debate about the legitimacy, or not, of the development of AGI and ASI, which is conducted below.

  148. Muehlhauser and Bostrom (2014), pp. 41–44.

  149. Yudkowsky (2001), p. 5.

References

  • Albus JS (1991) Outline for a theory of intelligence. IEEE Transactions on Systems, Man and Cybernetics 21, p 473

  • Armstrong S, Sotala K (2012) How we're predicting AI – or failing to. In: Romportl J (ed) Beyond AI: artificial dreams. Machine Intelligence Research Institute, Pilsen, Czech Republic, p 52

  • Bekey GA (2005) Autonomous robots: from biological inspiration to implementation and control. MIT Press, Cambridge

  • Ben-Ari D, Frish Y, Lazovski A, Eldan U, Greenbaum D (2017) "Danger, Will Robinson"? Artificial intelligence in the practice of law: an analysis and proof of concept experiment. Richmond J Law Technol 23:4–17

  • Bostrom N (2006) What is a singleton? Linguist Philos Invest 5(2):48–54

  • Bostrom N (2012) How hard is artificial intelligence? Evolutionary arguments and selection effects. J Consciousness Stud 19(7-8):103–130

  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  • Bostrom N (2016) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford, pp 7–8

  • Brownstein BJ et al (1983) Technological assessment of future battlefield robotic applications. In: Proceedings of the army conference on application of AI to battlefield information management. US Navy Surface Weapons Center, White Oak, p 169

  • Busse J, Humm B et al (2015) Actually, what does "ontology" mean? A term coined by philosophy in the light of different scientific disciplines. J Comput Inf Technol 23:29–41

  • Castelvecchi D (2016) Can we open the black box of AI? Nature 538:22

  • Chalmers DJ (1995) Facing up to the problem of consciousness. J Consciousness Stud 2(3):200–219

  • Chalmers D (1996) The conscious mind: in search of a fundamental theory. Oxford University Press, Oxford

  • Chalmers D (2008) The hard problem of consciousness. In: Velmans M, Schneider S (eds) The Blackwell companion to consciousness. Wiley-Blackwell, Hoboken

  • Chalmers D (2010) The singularity: a philosophical analysis. J Consciousness Stud 17(9-10):7–9

  • Chandrasekaran B (1990) What kind of information processing is intelligence? In: Partridge D, Wilks Y (eds) The foundations of artificial intelligence. Springer, New York, p 14

  • Charniak E, McDermott D (1985) Introduction to artificial intelligence. Addison-Wesley, Boston

  • Cuellar MF (2017) A simpler world? On pruning risks and harvesting fruits in an orchard of whispering algorithms. UC Davis Law Rev 51:27–33

  • Damasio AR (1994) Descartes' error: emotion, reason, and the human brain. Avon Books, New York, pp 247–248

  • Davis E, Marcus G (2015) Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun ACM 58:92–93

  • Eden A, Steinhart E, Pearce D, Moor J (2012) Singularity hypotheses: an overview. In: Eden A, Moor J, Søraker J, Steinhart E (eds) Singularity hypotheses: a scientific and philosophical assessment. Springer, New York, pp 28–29

  • Frye BL (2018) The lion, the bat & the thermostat: metaphors on consciousness. Savannah Law Rev 5:42–44

  • Gigerenzer G, Todd PM, ABC Research Group (1999) Simple heuristics that make us smart. Oxford University Press, Oxford

  • Ginsberg ML (1988) Multivalued logics: a uniform approach to reasoning in artificial intelligence. Comput Intell 4:265

  • Good IJ (1965) Speculations concerning the first ultraintelligent machine. In: Alt F, Rubinoff M (eds) Advances in computers, vol 6. Academic, New York

  • Greely HT (2018) Neuroscience, artificial intelligence, CRISPR - and dogs and cats. UC Davis Law Rev 51:2303–2315

  • Grossmann R (1983) The categorial structure of the world. In: Fundamental issues of artificial intelligence. Springer, New York

  • Gruber TR (1993) A translation approach to portable ontologies. Knowl Acquis 5(2):199–220

  • Guihot M, Matthew AF, Suzor NP (2017) Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt J Entertainment Technol Law 20:385–394

  • Herbert N (1985) Quantum reality: beyond the new physics. Anchor Books, New York, p 249

  • Herre H et al (2006) General Formal Ontology (GFO): a foundational ontology integrating objects and processes. Technical Report 8, University of Leipzig

  • Hibbard B (2001) Super-intelligent machines. ACM SIGGRAPH Comput Graph 35(1):13–15

  • Hughes J (2004) Citizen cyborg: why democratic societies must respond to the redesigned human of the future. Westview Press, Cambridge

  • Hughes J (2013) Transhumanism and personal identity. In: More M, More N (eds) The transhumanist reader. Wiley, Boston

  • Hunter L (1990) Molecular biology for computer scientists. In: Hunter L (ed) Artificial intelligence and molecular biology. MIT Press, Cambridge, pp 12–15

  • Hutter M (2010) Universal artificial intelligence: sequential decisions based on algorithmic probability. Springer, New York, pp 125–126

  • Johnson-Laird PN (1983) Mental models, pp 448–477

  • Karnow CEA (2016) The application of traditional tort theory to embodied machine intelligence. In: Calo, Froomkin, Kerr (eds) Robot law. Edward Elgar, Cheltenham, p 53

  • Khoury AH (2017) Intellectual property rights for "hubots": on the legal implications of human-like robots as innovators and creators. Cardozo Arts Entertainment Law J 35:635–640

  • Kurzweil R (1999) The age of spiritual machines: when computers exceed human intelligence. Penguin Books, London, pp 51–62

  • Kurzweil R (2005) The singularity is near. Viking, New York

  • Laton D (2016) Manhattan_Project.exe: a nuclear option for the digital age. Cath Univ J Law Technol 25:94

  • Marra WC, McNeil SK (2013) Understanding "the loop": regulating the next generation of war machines. Harv J Law Public Policy 36:1139–1145

  • Mason C (2015) Engineering kindness: first steps to building a machine with compassionate intelligence. Int J Synth Emot 6(1):1–3

  • McCarthy J (2008) The well-designed child. Artif Intell 172:2003–2011

  • McGinn C (1991) The problem of consciousness: essays towards a resolution, pp 202–213

  • Minsky M (1985) The society of mind. Simon and Schuster, New York

  • Muehlhauser L, Bostrom N (2014) Why we need friendly AI. Think 13:41–44

  • Müller VC, Bostrom N (2016) Future progress in artificial intelligence: a survey of expert opinion. In: Fundamental issues of artificial intelligence. Springer, p 553

  • Nadkarni PM, Ohno-Machado L, Chapman WW (2011) Natural language processing: an introduction. J Am Med Inform Assoc 18:544

  • Nute D (2011) A logical hole the Chinese room avoids. Minds Mach 21:431–433

  • Olson E (1997) The human animal: personal identity without psychology. Oxford University Press, Oxford

  • Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial general intelligence 2008: 1st AGI conference. Frontiers in artificial intelligence and applications, vol 171. IOS, Amsterdam, pp 483–492

  • Poole DL, Mackworth AK (2010) Artificial intelligence: foundations of computational agents. Cambridge University Press, Cambridge

  • Rich E, Knight K (1991) Artificial intelligence. McGraw-Hill, New York

  • Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson Education, London, pp 2–3

  • Russell S, Norvig P (2013) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, Hoboken

  • Schank RC (1987) What is AI, anyway? AI Magazine, Winter, pp 59–60

  • Scherer MU (2016) Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harv J Law Technol 29:363–364

  • Schneider S (2019) Artificial you: AI and the future of your mind. Princeton University Press, Princeton, p 7

  • Shaffer M (2009) A logical hole in the Chinese room. Minds Mach 19(2):229–235

  • Shanahan M (2010) Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, New York

  • Smith JC (1998) Machine intelligence and legal reasoning. Chicago-Kent Law Rev 73:277–281

  • Soares N, Fallenstein B (2017) Agent foundations for aligning machine intelligence with human interests: a technical research agenda. In: The technological singularity: managing the journey. Springer, New York, p 10

  • Suchman L, Weber J (2016) Human-machine autonomies. In: Bhuta N, Beck S, Geiß R, Liu HY, Kreß C (eds) Autonomous weapon systems: law, ethics, policy. Cambridge University Press, Cambridge, pp 39–40

  • Tarleton N (2010) Coherent extrapolated volition: a meta-level approach to machine ethics. Machine Intelligence Research Institute, Berkeley, p 1

  • Tegmark M (2017) Life 3.0: being human in the age of artificial intelligence. Random House, New York, pp 140–141

  • Tito J (2017) Destination unknown: exploring the impact of artificial intelligence on government. Centre for Public Impact, pp 7–8

  • Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460

  • Uschold M, Gruninger M (1996) Ontologies: principles, methods and applications. Knowl Eng Rev 11(2)

  • Vinge V (1993) The coming technological singularity: how to survive in the post-human era. Whole Earth Review, Winter

  • Wallach W (2016) The singularity: will we survive our technology? (Doug Wolens, director, I-Maginemedia). Jurimetrics J 56:297

  • Weiss T (1990) Closing the Chinese room. Ratio 3:165–181

  • Winograd T (2006) Thinking machines: can there be? Are we? In: Partridge D, Wilks Y (eds) The foundations of artificial intelligence. Cambridge University Press, Cambridge, p 167

  • Wisskirchen et al (2010) Artificial intelligence and robotics and their impact on the workplace, p 10

  • Yudkowsky E (2001) Creating friendly AI 1.0: the analysis and design of benevolent goal architectures. Machine Intelligence Research Institute, p 3

  • Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. Machine Intelligence Research Institute, Berkeley, p 16


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Tzimas, T. (2021). The Ontology of AI. In: Legal and Ethical Challenges of Artificial Intelligence from an International Law Perspective. Law, Governance and Technology Series, vol 46. Springer, Cham. https://doi.org/10.1007/978-3-030-78585-7_3


  • DOI: https://doi.org/10.1007/978-3-030-78585-7_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78584-0

  • Online ISBN: 978-3-030-78585-7

  • eBook Packages: Law and Criminology (R0)
