AI Ethics - Critical Reflections on Embedding Ethical Frameworks in AI Technology

  • Conference paper

Culture and Computing. Design Thinking and Cultural Computing (HCII 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12795)

Abstract

Embedding ethical frameworks in artificial intelligence (AI) technologies has been a popular topic of academic research for the past decade [1,2,3,4,5,6,7]. The studies differ in how they perceive AI technology, ethics, the role of technical artefacts, and the socio-technical aspects of AI. In addition, most studies insufficiently define the connection between the process of embedding ethical frameworks in AI technology and the larger framework of AI ethics. These deficiencies have caused the concept of AI ethics and the construct of embedding ethical parameters into AI to be used in an ambiguous, rather than a complementary, manner.

One reason for the ambiguity in this field of research is the lack of a comprehensive conceptual framework for AI ethics in general. I intend to fill this void by grounding AI ethics as a subfield of the philosophy of technology and applied ethics, and by presenting its main issues of study through an examination of recognized spheres of activity using the method of levels of abstraction [8]. As an outcome, I put forward an initial hierarchical conceptual framework for AI ethics. I then discuss the connection between the process of embedding ethical frameworks in AI and the larger AI ethics framework, and conclude by presenting basic requirements for the sphere of activity hereafter known as embedded ethics.


Notes

  1. Even though Floridi and Taddeo focus on the information sphere, they examine it as a phenomenon emerging from the combination of data, algorithms, and hardware and software applications [10].

  2. Floridi and Taddeo most likely understand this risk, since they propose that practices related to responsible research and innovation should be considered when examining important practices related to data ethics.

  3. The goals are always defined by human operators.

  4. This is the case for Moral Machines (2008), which I elaborate on in the third chapter of this paper.

  5. The research community has a responsibility to state explicitly what their research considers. By not stating the current development phase of the technology, and by not indicating when a paper considers theoretically possible but unlikely scenarios, researchers legitimize pseudo-problems, such as the closeness of the singularity.

  6. Human activities always take place in social contexts, which are shaped by varying cultural, historical, and political backgrounds [2, 24].

  7. The design phase is about bringing abstract ideas into existence in the real world, but understanding it only as a phase of a product's lifecycle is not enough for applied ethics.

  8. Technical artifacts consist of the artifact (tangible or intangible) and its use-plan [30, 31]. This definition shows how artefacts are always a means to instantiate human intentions.

  9. By wider, I mean that the concept of sociotechnical stretches to refer to relations outside of the mere user(s)-technical artefact(s) relation.

  10. Even if the few people in question have good intentions, the strong narrative that technology development is a morally neutral activity [13, 14], and the fact that a few people cannot in any way perceive the needs and desires of large populations, water down the possibility of granting that kind of power to a small group [13, 23, 35]. Therefore, enabling meaningful societal discourse is one of the cornerstones of (AI) deployment ethics [12, 32].

  11. It is important to notice that actors with different roles require different levels of intelligibility [2, 28]. For example, operators of an AI system require understandable information explaining the states relevant to the basic functions they use, whereas engineers taking part in an AI system's redesign require explanations that reach the underlying phases of information processing behind those functions.

  12. A typed variable is a variable that can hold only certain explicitly stated data. Data in this case can refer either to symbols of empirical perception or to symbols related to purely conceptual theories [8].

  13. There are two types of information that can be depicted by the levels of abstraction method: analogous and discrete. Analogous information refers to information used in the natural sciences to depict the basis of natural phenomena; its observables can take an infinite number of values, and their behavior is described with differential equations. Discrete information, in contrast, means that the observables have a finite number of values they can take [8]. This research considers discrete information.
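    The notions in the two notes above, a typed variable and a discrete observable with a finite value set, can be sketched in code. This is a minimal illustration only; the traffic-light observable and the function names are hypothetical and not taken from the paper or from [8].

    ```python
    from enum import Enum

    # A hypothetical observable at a "traffic light" level of abstraction:
    # a typed variable that can hold only the explicitly stated values.
    class LightState(Enum):
        RED = "red"
        AMBER = "amber"
        GREEN = "green"

    def observe(raw: str) -> LightState:
        """Map a raw reading onto the typed observable; data outside the
        finite value set is rejected rather than silently accepted."""
        return LightState(raw)  # raises ValueError for untyped data

    state = observe("red")
    assert state is LightState.RED
    # The observable is discrete: it can take only len(LightState) == 3 values.
    ```

    The point of the sketch is that the type declaration itself fixes the observable's finite value space, which is what makes the level of abstraction discrete in the sense of [8].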

  14. Surjective means that an abstract observation can be traced back to at least one concrete counterpart. Its strict meaning would allow only a single concrete counterpart per abstract observation, but as Floridi points out, abstract information in the humanities is often traced back to a connection between several concrete counterparts.
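    In standard notation, the reading of surjectivity sketched in this note can be written as follows; the symbols C (concrete observations), A (abstract observations), and the map R are my own labels, introduced only for illustration:

    ```latex
    % A map R from concrete observations C to abstract observations A is
    % surjective when every abstract observation has at least one concrete
    % counterpart:
    R : C \to A \quad \text{surjective} \iff \forall a \in A \;\exists c \in C : R(c) = a
    ```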

  15. AI4People is an Atomium European Institution for Science, Media, and Democracy (EISMD) initiative which aims to produce frameworks for a good AI society. For more information, see https://www.eismd.eu/ai4people/.

  16. Opportunities can turn into missed opportunities if AI is underused for the sake of misleading argumentation.

  17. Some aspects of the governance ethics LoA may best serve their purpose if regulated nationally or by an intergovernmental covenant [40], but that is a separate discourse and outside the scope of this article.

  18. It would be more accurate to talk about prejudicial discrimination, since discrimination in its broad meaning refers to distinguishing groups of information from a mass of data. In that broad meaning, discrimination is a non-separational function of AI.

  19. Compare to the concept of meaningful human control as an HCI grand challenge [46].

References

  1. European Commission: Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

  2. The IEEE global initiative on ethics of autonomous and intelligent systems: ethically aligned design: a vision for prioritizing human well-being with autonomous and intelligent systems. First Edition. IEEE (2019)


  3. van de Poel, I.: Embedding values in artificial intelligence (AI) systems. Mind. Mach. 30(3), 385–409 (2020). https://doi.org/10.1007/s11023-020-09537-4

  4. Gillespie, T.: Systems Engineering for Ethical Autonomous Systems. SciTech Publishing, London (2019). ISBN-13: 978-1-78561-372-2


  5. Arkin, R.: Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. In: Proceedings of the 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Amsterdam, Netherlands, 12–15 March 2008, pp. 121–128. IEEE (2008)


  6. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, New York (2008)


  7. Anderson, M., Anderson, S.L. (eds.): Machine Ethics. Cambridge University Press, New York (2011)


  8. Floridi, L.: The method of levels of abstraction. Mind. Mach. 18, 303–329 (2008). https://doi.org/10.1007/s11023-008-9113-7


  9. Johnson, D.G., Miller, K.W.: Un-making artificial moral agents. Ethics Inf. Technol. 10(2), 123–133 (2008). https://doi.org/10.1007/s10676-008-9174-6


  10. Floridi, L., Taddeo, M.: What is data ethics? Philos. Trans. R. Soc. A: Math. Phys. Eng. Sci. 374(2083), 20160360 (2016). https://doi.org/10.1098/rsta.2016.0360


  11. Hollnagel, E., Woods, D.D.: Joint Cognitive Systems – Foundations of Cognitive Systems Engineering. CRC Press/Taylor & Francis Group, London (2005). ISBN-13: 978-0-367-86420-0


  12. Schomberg, R.V. (Ed.): Towards responsible research and innovation in the information and communication technologies and security technologies fields. Publication Office of the European Union, Luxembourg (2011). http://ec.europa.eu/research/science-society/document_library/pdf_06/mep-rapport-2011_en.pdf

  13. Jasanoff, S.: Future imperfect: science, technology and the imaginations of modernity. In: Jasanoff, S., Kim, S. (eds.): Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. The University of Chicago Press, London (2015)


  14. Floridi, L.: Soft ethics and the governance of the digital. Philos. Technol. 31(1), 1–8 (2018). https://doi.org/10.1007/s13347-018-0303-9


  15. Cabinet Office of Japan: Society 5.0. https://www8.cao.go.jp/cstp/english/society5_0/index.html. Accessed 12 Mar 2021

  16. Samoili, S., Lopez, C.M., Gomez, G.E., De Prato, G., Martinez-Plumed, F., Delipetrev, B.: AI WATCH. Defining Artificial Intelligence. EUR 30117 EN. Publications Office of the European Union, Luxembourg (2020). ISBN 978-92-76-17045-7. https://doi.org/10.2760/382730

  17. Minsky, M.L.: Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs (1967)


  18. Russell, S., Norvig, P.: Artificial Intelligence – A Modern Approach, 3rd edn. Pearson, Boston (2010)


  19. Pietikäinen M., Silven, O.: Tekoälyn haasteet: koneoppimisesta ja konenäöstä tunnetekoälyyn. Oulun Yliopisto, Oulu (2019). ISBN 978-952-62-2482-4


  20. Bostrom, N., Yudkowsky E.: The ethics of artificial intelligence. In: Frankish, K., Ramsey, W. (eds.) The Cambridge Handbook of Artificial Intelligence, pp. 316–334. Cambridge University Press, Cambridge (2014). https://doi.org/10.1017/CBO9781139046855.020

  21. Kostopoulos, L.: Decoupling Human Characteristics from Algorithmic Capabilities. The IEEE Standards Association (2014)


  22. Beauchamp, T., Childress, J.: Principles of Biomedical Ethics, 7th edn. Oxford University Press, New York (2013)


  23. Hansson, S.O.: Theories and methods for the ethics of technology. In: Hansson, S.O. (ed.) The Ethics of Technology. Rowman & Littlefield, London (2017). ISBN 978-1-7834-8658-8.


  24. Hallamaa, J.: Yhdessä toimimisen etiikka [Ethics of acting together]. Gaudeamus, Helsinki (2017)


  25. Westermarck, E.: The Origin and Development of the Moral Ideas, vol. 2. Macmillan, London (1908)


  26. Habermas, J.: The theory of communicative action. In: Reason and the Rationalization of Society, vol. 1. Heinemann, London (1984)


  27. Velasquez, M., Andre, C., Shanks, T., Meyer, M.: What is Ethics? Markkula center for Applied ethics (2019). https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/what-is-ethics/. Accessed 12 Mar 2021

  28. Floridi, L., et al.: AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind. Mach. 28(4), 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5


  29. Hallamaa, J., Snell, K.: Ethics in AI research – what and how? Finnish Center for Artificial Intelligence (2020). https://fcai.fi/eab-blog/2020/9/4/ethics-in-ai-research-what-and-how. Accessed 12 Mar 2021

  30. Jones, D., Gregor, S.: The anatomy of a design theory. J. Assoc. Inf. Syst. 8(5), 312–335 (2007)


  31. Simon, H.A.: The Sciences of the Artificial. MIT Press, Cambridge (1970)


  32. Saariluoma, P., Cañas, J., Leikas, J.: Designing for Life. MacMillan, London (2016)


  33. Franssen, M., Lokhorst, G.-J., van de Poel, I.: Philosophy of technology. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (Fall 2018 Edition) (2018). https://plato.stanford.edu/archives/fall2018/entries/technology/

  34. Saariluoma, P., Oulasvirta, A.: User psychology: re-assessing the boundaries of a discipline. Sci. Res. 1(5), 317–328 (2010)


  35. Homepage of Black in AI. https://blackinai.github.io/. Accessed 12 Mar 2021

  36. European Commission: White Paper on Artificial Intelligence: a European approach to excellence and trust. Brussels (2020)


  37. ETAIROS -project homepage. https://etairos.fi/en/front-page/. Accessed 12 Mar 2021

  38. AIGA -project homepage. https://des.utu.fi/projects/aiga/. Accessed 12 Dec 2021

  39. Canca, C.: AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR. United Nations University, Center for Policy Research (2019)


  40. Ben-Israel, I., et al.: Towards regulation of AI systems. Council of Europe (2020)


  41. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. Proc. Mach. Learn. Res. 81, 1–15 (2018)


  42. Vakkuri, V., Kemell, K.-K., Abrahamsson, P.: Implementing ethics in AI: initial results of an industrial multiple case study. In: Franch, X., Männistö, T., Martínez-Fernández, S. (eds.) PROFES 2019. LNCS, vol. 11915, pp. 331–338. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-35333-9_24


  43. Ruff, H., Narayanan, S., Draper, M.: Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles. Pres. Teleoper. Virtual Environ. 11(4), 335–351 (2002)


  44. Berk, R., Hyatt, J.: Machine learning forecasts of risk to inform sentencing decisions. Fed. Sentencing Report. 27(4), 222–228 (2015). https://doi.org/10.1525/fsr.2015.27.4.222


  45. Norman, D.: The Design of Everyday Things, Revised and expanded edition. Basic Books (2013). ISBN 9780262525671


  46. Stephanidis, C., Salvendy, G., et al.: Seven HCI grand challenges. Int. J. Human-Comput. Interact. 35(14), 1229–1269 (2019). https://doi.org/10.1080/10447318.2019.1619259



Author information

Correspondence to Henrikki Salo-Pöntinen.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Salo-Pöntinen, H. (2021). AI Ethics - Critical Reflections on Embedding Ethical Frameworks in AI Technology. In: Rauterberg, M. (ed.) Culture and Computing. Design Thinking and Cultural Computing. HCII 2021. Lecture Notes in Computer Science, vol. 12795. Springer, Cham. https://doi.org/10.1007/978-3-030-77431-8_20


  • DOI: https://doi.org/10.1007/978-3-030-77431-8_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77430-1

  • Online ISBN: 978-3-030-77431-8

  • eBook Packages: Computer Science, Computer Science (R0)
