Abstract
In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has been accompanied by growing computational power and an increasing capacity to solve complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent years, the space for data-related ethics has not been precisely defined in organisations. As a consequence, responsibilities have overlapped and clear accountabilities have been lacking. Ethical issues have, therefore, been dealt with at inadequate levels of abstraction (e.g. legal, technical). Yet, if building an ethical infrastructure requires the collaboration of every body within an organisation, addressing ethical issues related to data requires leaving room for the appropriate level of abstraction. This paper first aims to show how the space of data ethics is already latent in organisations. It then highlights how to redefine roles (chief data ethics officer, data ethics committee, etc.) and codes (code of data ethics) to create and maintain an environment where ethical reasoning about data, information, and AI systems may flourish.
Availability of data and material
Not applicable.
Code availability
Not applicable.
Notes
The Data Protection Officer (DPO) is a role established in 2018 by the General Data Protection Regulation (GDPR) in the European Union. Although this role is supposed to be completely independent and focussed on the protection of personal data in a company, it is common practice, especially outside the EU, for the Chief Data Officer (CDO) to be in charge of compliance with personal data regulation. For this reason, the two roles are considered as being on the same level in this paper.
To increase the readability of the document, acronyms will be presented on their first occurrence, while key roles will be italicised without the relevant acronym. For graphical reasons, however, the figures will show acronyms with the legend. Exceptions are acronyms commonly used in the corporate environment (SME, CEO, CISO, DPO), which will be presented in their expanded form to facilitate reading for both experts and non-experts.
c-level, also called the c-suite, is a term used to describe high-ranking executives.
In Plato’s Republic, guardians refer to a pool of people who are tasked with the responsibility of protecting the republic from both internal and external threats. They are able to understand true goodness and justice in a way that other people cannot (Republic, book V).
I thank the reviewer for the suggestion to employ a broader category and for the labelling ‘supervisory committees of vulnerable categories’ that I use in this paper.
“the malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards”. (Floridi 2019).
“the malpractice of doing increasingly less “ethical work” (such as fulfilling duties, respecting rights, honouring commitments, etc.) in a given context the lower the return of such ethical work in that context is (mistakenly) perceived to be” (Floridi 2019).
The profile of a Chief Artificial Intelligence Ethics Officer (CAIEO) is not explicitly considered in this article because the algorithmic activity may be managed by the chief data ethics officer. However, in some types of organisation whose goal is to create statistical models for AI, the chief AI ethics officer role could become relevant. Further studies are needed to define the possible interaction between chief data ethics officer and chief AI ethics officer.
For example, it is mandatory under the FH IAAIS framework.
Choice: from French chois, “action of selecting” (c. 1300); “power of choosing” (early 14c.). etymonline.com.
Decision: from Latin decisionem, “act of deciding, settlement, agreement”. etymonline.com.
I will not list the articles here, but there are numerous cases in the press where ethics committees are convened to assess a specific moral situation as a result of a public scandal.
Abbreviations
- AI: Artificial intelligence
- ARC: Algorithm risk committee
- BoK: Body of knowledge
- CDEO: Chief data ethics officer
- CDO: Chief data officer
- CDOC: Children’s data oversight committee
- CEO: Chief executive officer
- CEtO: Chief ethics officer
- CIO: Chief information officer
- CISO: Chief information security officer
- CoDE: Code of data ethics
- CoE: Code of ethics
- CSC: Cyber security committee
- DCC: Data control committee
- DEC: Data ethics committee
- DPO: Data protection officer
- DW: Data workers
- EC: Ethics committee
- EU: European Union
- GDPR: General Data Protection Regulation
- IAAIS: Independent Audits of Artificial Intelligence Systems (ForHumanity)
- Infraethics: Infrastructure of ethics
- LoA: Level of abstraction
- SME: Small- and medium-sized enterprises
References
Adey P (2008) Airports, mobility and the calculative architecture of affective control. Geoforum 39(1):438–451. https://doi.org/10.1016/j.geoforum.2007.09.001
ALSTOM (2020) Ethics and compliance committee internal rules
Annas G, Grodin M (2016) Hospital ethics committees, consultants, and courts. AMA J Ethics 18(5):554–559. https://doi.org/10.1001/journalofethics.2016.18.5.sect1-1605
Ariely D (2008) Predictably irrational: The hidden forces that shape our decisions. Harper Perennial
Aristotle (1985) Nicomachean ethics (Irwin T, Ed.). Hackett Publishing
Ashri R (2022) Building AI software: Data-driven vs model-driven AI and why we need an AI-specific software… Hackernoon
California Consumer Privacy Act (CCPA) (2019)
Carrier R (2021) The rise of the ethics committee.
Cassirer E (1953) Substance and Function, and Einstein’s Theory of Relativity (reprint of). Dover Public. https://doi.org/10.1093/oso/9780190933784.003.0011
Clausen S, Brünker F (2022) The impact of signaling commitment to ethical AI on organizational attractiveness. Wirtschaftsinformatik 2022 Proceedings - Track 7: Digital Business Models & Entrepreneurship
Da Costa DCT, De Neufville R (2012) Designing efficient taxi pickup operations at airports. Transp Res Rec 2300:91–99. https://doi.org/10.3141/2300-11
Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://doi.org/10.1201/9781003278290-44
Devillers L (2021) Human-robot interactions and affective computing: The ethical implications. In Robotics, AI, and Humanity. Springer, pp. 205–211
European Commission (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJEU L119, 04/05/2016. In Official Journal of the European Union. http://eur-lex.europa.eu/pri/en/oj/dat/2003/l_285/l_28520031101en00330037.pdf
European Commission (2021) Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
Flick C (2016) Informed consent and the Facebook emotional manipulation study. Res Ethics. https://doi.org/10.1177/1747016115599568
Floridi L (2010) Information. A very short introduction. Oxford University Press
Floridi L (2011) The Philosophy of Information (UK). Oxford University Press
Floridi L (2013a) Distributed morality in an information society. Sci Eng Ethics 19(3):727–743. https://doi.org/10.1007/s11948-012-9413-4
Floridi L (2013b) The Ethics of Information. Oxford University Press
Floridi L (2014) The Fourth Revolution. How the Infosphere is Reshaping Human Reality. Oxford University Press
Floridi L (2016) Hyperhistory, the emergence of the MASs, and the design of infraethics. In: Hildebrandt M, van den Berg B (eds) Information, Freedom and Property: The Philosophy of Law Meets the Philosophy of Technology. Routledge, pp 153–172
Floridi L (2018) Soft ethics and the governance of the digital. Phil Technol 31(1):1–8. https://doi.org/10.1007/s13347-018-0303-9
Floridi L (2019) Translating principles into practices of digital ethics: five risks of being unethical. Phil Stud Phil Technol 32:185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi L (2022) The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Sci Rev 1:1–15. https://doi.org/10.1162/99608f92.8cd550d1
Floridi L, Sanders JW (2005) Internet Ethics: The Constructionist Values of Homo Poieticus. In Cavalier R (Ed.), The Impact of the Internet on Our Moral Lives. State University of New York Press
Floridi L, Taddeo M (2016) What is data ethics? Phil Trans R Soc A 374(2083):1–5. https://doi.org/10.1098/rsta.2016.0360
Goffi ER, Colin L, Belouali S (2021) Ethical Assessment of AI Cannot Ignore Cultural Pluralism: A Call for Broader Perspective on AI Ethics. Arribat Int J Hum Rights 1(2):151–175
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Mind Mach 30(1):99–120. https://doi.org/10.1007/s11023-020-09517-8
Hepburn RW (1984) ‘Wonder’ and Other Essays: Eight Studies in Aesthetics and Neighbouring Fields. University Press, Edinburgh
ICO (2020) Age appropriate design: a code of practice for online services. https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-code
Jere MS, Farnan T, Koushanfar F (2021) A taxonomy of attacks on federated learning. IEEE Secur Priv 19(2):20–28. https://doi.org/10.1109/MSEC.2020.3039941
Jernigan C, Mistree BFT (2009) Gaydar: Facebook friendships expose sexual orientation. First Monday. https://doi.org/10.5210/fm.v14i10.2611
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intellig 1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2
Johnson RL, Pistilli G, Menédez-González N, Denisse L, Duran D, Panai E, Kalpokiene J, Bertulfo DJ (2022) The Ghost in the Machine has an American accent: value conflict in GPT-3. Preprint
Jurkiewicz CL (2018) Big data, big concerns: ethics in the digital age. Integrity 20(sup1):S46–S59. https://doi.org/10.1080/10999922.2018.1448218
Kahneman D (2011) Thinking, Fast and Slow. Penguin Press
Kaplan B (2015) Selling health data: de-identification, privacy, and speech. Camb Q Healthc Ethics 24(3):256–271. https://doi.org/10.1017/S0963180114000589
Kloppers HJ (2013) Driving corporate social responsibility (CSR) through the companies act: an overview of the role of the social and ethics committee. Potchefstroom Electron Law J 16(1):165–199. https://doi.org/10.4314/pelj.v16i1.6
Krafft T, Hauer M, Hustedt C, Fetic L (2020) From principle to practice: an interdisciplinary framework to operationalise AI ethics. In VDE and Bertelsmann Stiftung. https://doi.org/10.4324/9781003028215-11
La Porte JM, Narbona J (2021) Colloquy with Luciano Floridi on the anthropological effects of the digital revolution. Church Commun Cul 6(1):119–138. https://doi.org/10.1080/23753234.2021.1885984
Light R, Panai E (2022) The self-synchronisation of AI ethical principles. DISO 1:24. https://doi.org/10.1007/s44206-022-00023-1
Marche S (2021) The Chatbot Problem
McCarthy J, Minsky ML, Rochester N, Shannon CE (2006) A proposal for the Dartmouth summer research project on artificial intelligence (August 31, 1955). AI Mag 27(4):12–14
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: Mapping the debate. Big Data Soc 3(2):1–21. https://doi.org/10.1177/2053951716679679
Mökander J, Floridi L (2021) Ethics-based auditing to develop trustworthy AI. Mind Mach 31(2):323–327. https://doi.org/10.1007/s11023-021-09557-8
OCHA (2020) Guidance Note for data responsibility in humanitarian action. https://reliefweb.int/report/world/centre-humanitarian-data-guidance-note-series-data-responsibility-humanitarian-action-5
ODI (2019) Data ethics canvas. In the open data institute. https://theodi.org/article/the-data-ethics-canvas-2021/#1624955096642-4c6671b7-cee2
Perrault R, Shoham Y, Brynjolfsson E, Clark J, Etchemendy J, Grosz B, Lyons T, Manyika J, Carlos Niebles J, Mishra S (2019) Artificial Intelligence Index Report 2019
Schwartz B (2004) The paradox of choice: why more is less. Harper Perennial
Stahl BC (2021) Ethical issues of AI. In: Artificial intelligence for a better future: an ecosystem perspective on the ethics of AI and emerging digital technologies. Springer, pp 35–53. https://doi.org/10.1007/978-3-030-69978-9_4
Tarnoff B, Weigel M (2020) Voices from the Valley: Tech Workers Talk About What They Do and How They Do It. FSG Originals
Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, Floridi L (2022) The ethics of algorithms: key problems and solutions. AI Soc 37(1):215–230. https://doi.org/10.1007/s00146-021-01154-8
Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460
United Nations (1989) The Convention on the rights of the child. https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-child
Funding
Not applicable.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Enrico Panai: Fellow of ForHumanity (https://forhumanity.center).
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Panai, E. The latent space of data ethics. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01757-3