Investigating the Ontology of AI vis-à-vis Technical Artefacts

Chapter in AI, Consciousness and The New Humanism

Abstract

Artificial intelligence is the new technological buzzword. Everything from camera apps on your mobile phone to medical diagnosis algorithms to expert systems is now claimed to be ‘AI’, and ever more facets of our lives are being colonized by the application of AI/ML systems (henceforth, ‘AI’). But what does this entail for designers, users, and society at large? Most of the philosophical discourse in this context has focused on analysing and clarifying the epistemological claims of intelligence within AI and on its moral implications. Philosophical critiques of the plausibility of artificial intelligence have little to say about the real-world repercussions of introducing AI systems into every possible domain in the name of automation and efficiency; similarly, most moral misgivings about AI have to do with conceiving of AI systems as autonomous agents beyond the control of human actors. These discussions have clarified the debate surrounding AI to a great extent; however, certain crucial questions remain outside their ambit. Arguments in support of AI often take advantage of this void by emphasizing that AI systems are no different from previously existing ‘unintelligent’ technologies, thereby implying that the economic, existential, and ethical threats posed by these systems are either similar in kind to those posed by any other technology or grossly misplaced and exaggerated. In this chapter, I shall think through this assumption by investigating the ontology of AI systems vis-à-vis ordinary (non-AI) technical artefacts to see wherein the distinction between the two lies. I shall examine how contemporary ontologies of technical artefacts (e.g., intentionalist and non-intentionalist theories of function) apply to AI. Clarifying the ontology of AI is thus crucial to understanding the normative and moral significance of AI systems and the implications that follow.

Notes

  1. They do acknowledge, however, that their desiderata for an adequate theory of artefact functions are contextual and that other contexts might be accounted for by alternative desiderata.

  2. Preston (2003) offers a critical examination of the four desiderata of Vermaas and Houkes. She questions whether any theory can adequately address all four without contradiction and argues for giving up D4. In this chapter, the four desiderata will be treated as a heuristic for any theory of technical functions. See also Kroes (2012: 76) for a discussion and extension of these desiderata.

  3. Interestingly, recent work on efficient hardware design for AI systems suggests that analog computing has an advantage over its digital counterpart in enabling faster and more efficient execution of machine learning algorithms. This clearly highlights the significance of the structural basis of AI systems.

  4. John Searle, on the other hand, generalizes this collective-intentionality-based theory of function assignment to all artefacts, social as well as technical.

  5. He echoes Norbert Wiener’s pronouncement that “[i]n the past, a partial and inadequate view of human purpose has been relatively innocuous only because it has been accompanied by technical limitations.... This is only one of the many places where human impotence has shielded us from the full destructive impact of human folly” (Russell, 2019: 137).

  6. It must be noted at the outset that a similar issue arises with non-AI technical artefacts too, under the name of ‘unintended consequences’. The alignment problem is nevertheless quite distinct from unintended consequences: whereas the latter concern a break between designer intentions and the context of use, the former concerns a break between designer intentions and the technical function itself. It is in this sense that the alignment problem can be said to be unique to AI systems, although these systems may also fall prey to unintended consequences arising out of unforeseeable use contexts.

References

  • Christian, B. (2021). The alignment problem: Machine learning and human values. W. W. Norton & Company.

  • Hilpinen, R. (2018). Artifact. In The Stanford encyclopedia of philosophy (Summer 2018 ed.). https://plato.stanford.edu/archives/sum2018/entries/artifact/

  • Kroes, P. (2012). Technical artefacts: Creations of mind and matter. Springer.

  • Preston, B. (2003). Of Marigold beer: A reply to Vermaas and Houkes. The British Journal for the Philosophy of Science, 54(4), 601–612.

  • Preston, B. (2009a). Philosophical theories of artifact function. In A. Meijers (Ed.), Philosophy of technology and engineering sciences (Vol. 9). Elsevier.

  • Preston, B. (2013). A philosophy of material culture: Action, function, and mind. Routledge.

  • Preston, B. (2022). Artifact. In The Stanford encyclopedia of philosophy (Winter 2022 ed.). https://plato.stanford.edu/archives/win2022/entries/artifact/

  • Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.

  • Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Pearson.

  • Vermaas, P. E., & Houkes, W. (2003). Ascribing functions to technical artefacts: A challenge to etiological accounts of functions. The British Journal for the Philosophy of Science, 54(2), 261–289.

  • Vermaas, P. E., & Houkes, W. (2010). Technical functions: On the use and design of artefacts. Springer.

Author information

Correspondence to Ashwin Jayanti.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Jayanti, A. (2024). Investigating the Ontology of AI vis-à-vis Technical Artefacts. In: Menon, S., Todariya, S., Agerwala, T. (eds) AI, Consciousness and The New Humanism. Springer, Singapore. https://doi.org/10.1007/978-981-97-0503-0_17
