Abstract
Despite the growth of research on ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI; not enough attention has been paid to the “how” of applied ethics. This paper aims to advance research on the gap between practice and principles in AI ethics by identifying how companies apply those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, the study examines how companies approach ethical issues related to AI systems. A structured analysis of the transcripts surfaced numerous actual practices and findings, which are presented around the following main research topics: ethics and principles, privacy, explainability, and fairness. The interviewees also raised issues of accountability and governance. Finally, several recommendations are offered, such as developing sector-specific regulations, fostering a data-driven organisational culture, considering the algorithm’s complete life cycle, developing and using a specific code of ethics, and providing specific training on ethical issues. Despite some obvious limitations, such as the type and number of companies interviewed, this work identifies real examples and concrete priorities for advancing research on the gap between practice and principles in AI ethics, with a specific focus on Spanish companies.
Notes
See Dastin (2018).
See Confessore (2018).
For an updated list of principles, please check https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/.
For example, https://mapa.estrategiaia.es/mapa, which provides an updated list of organisations related to AI in Spain.
The number after “Quote” refers to the company ID (Table 3) from which the actual quote has been drawn.
References
Abrams M, Abrams J, Cullen P, Goldstein L (2019) Artificial intelligence, ethics, and enhanced data stewardship. IEEE Secur Priv 17(2):17–30. https://doi.org/10.1109/MSEC.2018.2888778
Bietti E (2020) From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. In: FAT* 2020—proceedings of the 2020 conference on fairness, accountability, and transparency, pp 210–219. https://doi.org/10.1145/3351095.3372860
Braun V, Clarke V (2006) Using thematic analysis in psychology. Qual Res Psychol 3(2):77–101. https://doi.org/10.1191/1478088706QP063OA
Brundage M, Avin S, Wang J, Belfield H, Krueger G, Hadfield G, Khlaaf H, Yang J, Toner H, Fong R, Maharaj T, Koh PW, Hooker S, Leung J, Trask A, Bluemke E, Lebensold J, O’Keefe C, Koren M, et al. (2020) Toward trustworthy AI development: mechanisms for supporting verifiable claims. http://arxiv.org/abs/2004.07213
Bryant A, Charmaz K (eds) (2007) The Sage handbook of grounded theory. Sage
Canca C (2020) Operationalising AI ethics principles. Commun ACM 63(12). https://doi.org/10.1145/3430368
Charisi V, Dennis L, Fisher M, Lieck R, Matthias A, Slavkovik M, Sombetzki J, Winfield AFT, Yampolskiy R (2017) Towards moral autonomous systems. http://arxiv.org/abs/1703.04741
Confessore N (2018) Cambridge Analytica and Facebook: the scandal and the fallout thus far. NY Times, 4 Apr 2018. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
Creswell JW (2012) Qualitative inquiry and research design: choosing among five approaches. SAGE, Los Angeles
Daly A, Hagendorff T, Li H, Mann M, Marda V, Wagner B, Wang WW (2020) AI, Governance and ethics: global perspectives. SSRN Electron J. https://doi.org/10.2139/ssrn.3684406
Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, London, UK. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. Accessed 1 Mar 2021
Data Ethics (2021) Data ethics readiness test. https://dataethics.eu/home/dataethics-readiness-test-2021/. Accessed 1 Mar 2021
de Bruin B, Floridi L (2017) The ethics of cloud computing. Sci Eng Ethics 23:21–39. https://doi.org/10.1007/s11948-016-9759-0
Edwards L, Veale M, Welbl J, van KM, Binns R, Lane G, Henderson T (2018) Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for, vol 16. https://perma.cc/PJX2-XT7X
Eitel-Porter R (2020) Beyond the promise: implementing ethical AI. AI Ethics 1(1):73–80. https://doi.org/10.1007/S43681-020-00011-6
European Commission (2019) Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 1 Mar 2021
Fernow J, de Miguel Beriain I, Brey P, Stahl B (2019) Setting future ethical standards for ICT, Big Data, AI and robotics. ORBIT J 2019(1):1–8. https://doi.org/10.29297/ORBIT.V2019I1.115
Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M (2020) Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electron J. https://doi.org/10.2139/SSRN.3518482
Floridi L (2019) Translating principles into practices of digital ethics: five risks of being unethical. Philos Technol 32:185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
Gebru T, Morgenstern J, Vecchione B, Vaughan JW, Wallach H, Daumé III H, Crawford K (2018) Datasheets for datasets. https://arxiv.org/abs/1803.09010v7
Gonzalez Fabre R, Camacho Ibáñez J, Tejedor Escobar P (2021) Moral control and ownership in AI systems. AI Soc 36:289–303. https://doi.org/10.1007/s00146-020-01020-z
Govia L (2020) Coproduction, ethics and artificial intelligence. J Dig Soc Res. https://doi.org/10.33621/jdsr.v2i3.53
Greene D, Hoffmann AL, Stark L (2019) Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii international conference on system sciences
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Minds Mach 30(1):99–120. https://doi.org/10.1007/s11023-020-09517-8
Hagerty A, Rubinov I (2019) Global AI ethics: a review of the social impacts and ethical implications of artificial intelligence. https://arxiv.org/abs/1907.07892v1
Hennink MM, Kaiser BN, Marconi VC (2016) Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res 27(4):591–608. https://doi.org/10.1177/1049732316665344
Hickok M (2021) Lessons learned from AI ethics principles for future actions. AI Ethics 1(1):41–47. https://doi.org/10.1007/S43681-020-00008-1
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399
Kevin M, Ana FI (2019) Case study—customer relation management, smart information systems and ethics. ORBIT J 2(2):1–24. https://doi.org/10.29297/ORBIT.V2I2.114
Kitchin R (2016) Thinking critically about and researching algorithms. Inf Commun Soc 20(1):14–29. https://doi.org/10.1080/1369118X.2016.1154087
Krefting L (1991) Rigor in qualitative research: the assessment of trustworthiness. Am J Occup Ther 45(3):214–222. https://doi.org/10.5014/AJOT.45.3.214
Krueger RA, Casey MA (2000) Focus groups—a practical guide for applied research, 3rd edn. SAGE Publications Inc., USA
Mark R, Anya G (2019) Ethics of using smart city AI and Big Data: the case of four large European Cities. ORBIT J 2(2):1–36. https://doi.org/10.29297/ORBIT.V2I2.110
Marshall C, Rossman GB (1995) Designing qualitative research, 2nd edn. Sage Publications, Thousand Oaks
Mason M (2010) Sample size and saturation in PhD studies using qualitative interviews. In: Forum qualitative Sozialforschung/Forum: qualitative social research, vol 11, no 3
McDuie-Ra D, Gulson K (2020) The backroads of AI: the uneven geographies of artificial intelligence and development. Area 52(3):626–633. https://doi.org/10.1111/AREA.12602
McNamara A, Smith J, Murphy-Hill E (2018) Does ACM’s code of ethics change ethical decision making in software development? In: ESEC/FSE 2018: proceedings of the 2018 26th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering. https://doi.org/10.1145/3236024.3264833
Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1(11):501–507. https://doi.org/10.1038/s42256-019-0114-4
Mittelstadt B, Russell C, Wachter S (2019) Explaining explanations in AI. In: FAT* 2019—proceedings of the 2019 conference on fairness, accountability, and transparency, pp 279–288. https://doi.org/10.1145/3287560.3287574
Morgan DL (1997) Focus groups as qualitative research. SAGE, New York
Morley J, Floridi L, Kinsey L, Elhalal A (2020) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 26(4):2141–2168. https://doi.org/10.1007/s11948-019-00165-5
Morley J, Elhalal A, Garcia F, Kinsey L, Mökander J, Floridi L (2021) Ethics as a service: a pragmatic operationalisation of AI ethics. Minds Mach 31(2):239–256. https://doi.org/10.1007/S11023-021-09563-W
Orr W, Davis JL (2020) Attributions of ethical responsibility by artificial intelligence practitioners. Inf Commun Soc 23(5):719–735. https://doi.org/10.1080/1369118X.2020.1713842
Rothenberger L, Fabian B, Arunov E (2019) Relevance of ethical guidelines for artificial intelligence—a survey and evaluation. In: Progress papers. https://aisel.aisnet.org/ecis2019_rip/26
Stahl BC, Wright D (2018) Ethics and privacy in AI and Big Data: implementing responsible research and innovation. IEEE Secur Priv 16(3):26–33. https://doi.org/10.1109/MSP.2018.2701164
Stahl BC, Antoniou J, Ryan M, Macnish K, Jiya T (2021) Organisational responses to the ethical issues of artificial intelligence. AI Soc. https://doi.org/10.1007/s00146-021-01148-6
Taylor L, Dencik L (2020) Constructing commercial data ethics. Technol Regul 2020:1–10
Vakkuri V, Kemell K-K (2020) AI ethics in industry: a research framework. arXiv preprint. https://arxiv.org/ftp/arxiv/papers/1910/1910.12695.pdf
Vakkuri V, Kemell KK (2019) Implementing AI ethics in practice: an empirical evaluation of the RESOLVEDD strategy. In: Hyrynsalmi S, Suoranta M, Nguyen-Duc A, Tyrväinen P, Abrahamsson P (eds) Software business. ICSOB 2019. Lecture Notes in Business Information Processing, vol 370. Springer, Cham. https://doi.org/10.1007/978-3-030-33742-1_21
Vakkuri V, Kemell K-K, Kultanen J, Siponen M, Abrahamsson P (2019) Ethically aligned design of autonomous systems: industry viewpoint and an empirical study. arXiv preprint arXiv:1906.07946
Vallés M (2009) Entrevistas cualitativas [Qualitative interviews] (Cuadernos metodológicos), 2nd edn. Centro de Investigaciones Sociológicas, Madrid
Watson GJ, Desouza KC, Ribiere VM, Lindič J (2021) Will AI ever sit at the C-suite table? The future of senior leadership. Bus Horiz. https://doi.org/10.1016/j.bushor.2021.02.011
Whittlestone J, Nyrup R, Alexandrova A, Cave S (2019) The role and limits of principles in AI ethics: towards a focus on tensions. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp 195-200
Winfield AF, Jirotka M (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos Trans R Soc A Math Phys Eng Sci 376(2133):20180085
Winfield AF, Michael K, Pitt J, Evers V (2019) Machine ethics: the design and governance of ethical AI and autonomous systems. Proc IEEE 107(3):509–517. https://doi.org/10.1109/JPROC.2019.2900622
Yeung K, Howes A, Pogrebna G (2019) AI governance by human rights-centred design, deliberation and oversight: an end to ethics washing. In: The Oxford handbook of AI ethics. Oxford University Press
Appendices
Appendix 1: Interview script
| Topic | Question | Time |
|---|---|---|
| Opening | Project objectives | 2′ |
| | Presentation: role in the organisation; approach to AI | 5′ |
| | Describe a project you are working on or have worked on recently (last 3–6 months): objective; users/segment; AI’s influence on the project; tools used | 5′ |
| Guidelines and principles | Principles/guidelines used for development, and in which phases (design, test, control, implementation)? Is there a product-design code specific to AI? Does it incorporate ethics criteria into the decision-making model? | 10′ |
| Bias and fairness | What do you mean by bias and equity? Do you incorporate criteria for these concepts? Examples (e.g., how an AI system can perpetuate existing biases)? | 10′ |
| Explainability | What do you mean by explainability and interpretability? Are they different? Do you incorporate criteria for these concepts? Examples? Kinds of algorithms, kinds of tools | 10′ |
| Privacy | What do you mean by privacy? Do you incorporate criteria for this concept? Examples? Data source, data type, collection, use, storage | 10′ |
| Issues | Problems/dilemmas encountered | 5′ |
| Closing | Any final reflections? Thank you and next steps | 5′ |
Appendix 2: Focus group discussion script

| Topic | Question | Time |
|---|---|---|
| Welcome | | 3′ |
| | A brief description of the objective of the investigation | 2′ |
| Introductory question | General impression of the draft report | 10′ |
| Key issues | Main question 1 (show the findings table): from this list of findings, which ones do you consider most relevant, and why? Main question 2: what has caught your eye? | 15′ + 15′ |
| Closing | Closing question 1: what next step would you consider attractive to deepen the investigation? | 15′ |
| | Thank you and next steps | 5′ |
Appendix 3: Codes and groups of codes
| Codes | | |
|---|---|---|
Accuracy | Against profit margin | Algorithm supervision |
Algorithm training | Algorithms carry bias | Anonymization not enough |
Anonymization solutions | Anonymization Used | Applications |
Asilomar principles | Autonomy | Balanced data |
Beneficence | Benefit vs. the risk of being wrong | BIAS |
Bias by correlation | Bias due to the analyst work | Bias due to the origin of data |
Bias in the data | Bias is a key topic | Bias is no concern |
Bias is not critical | Bias not found | Bias people vs. machines |
Bias vs. diversity | Bias vs. explainability | Bias vs. objective criteria |
Bias vs. unbalanced data | Broad approach to explainability | Business KPIs and Model KPIs |
Business team’s expectations | Business teams vs. technical teams | Certification |
C-level’s interest in reputation | Client does not ask ethical questions | Client more interested in the result than in the why |
Client wants to know why | Clients | Client’s criteria on privacy |
Client’s ethical criteria | Client’s specific training | Colinearity |
Company data and external data | Company size | Complexity of explainability |
Compliance mechanisms | Control of data in origin | Controlled access |
Cross data analysis | Cross ethical training | Cross-integration |
Data availability is key | Data culture | Data culture training |
Data gathering | Data governance | Data misuse risk |
Data ownership | Data quality | Dataset selection |
Data source problem due to decalibration | Decision-making | Degradation in OPs |
DevOps principles | Differential privacy is not well known | Diversity |
Effect on employment | Ethical checkpoints | Ethical concern is not that important |
Ethical dimension | Ethical training not needed | Ethics |
Ethics as a process | Ethics by design not used | Ethics by design used |
Ethics of data vs. system | Ethics of system vs. data | Ethics training—not done |
EU vs. USA differences | Explainability | Explainability—life cycle |
Explainability—not feasible | Explainability fostered by regulation | Fast deployment |
Federated learning | Federated learning and bias | Future |
Future of regulation | Gap between principles and practice | GDPR |
Gender bias | General knowledge about principles | Global explanations |
Google principles | Hierarchical analysis to control for bias | Human being |
Human control | Hype | Identification of design and control stages |
Importance of regulation vs. ethics | Individual knowledge vs. procedures | Inequality |
Integrative approach | Internal client vs. external client | Internal training is key |
Interpretability | ISO 27001 | Knowledge about LIME |
Knowledge about SHAP | Lack of auditing | Lack of formal checkpoints |
Lack of metrics and indicators | Local explanations | Low interest on the why but high interest in results |
Magic expectations | Misuse | MLOps principles |
Model selection | Montreal principles | More reliability for bigger impact |
Necessity vs. correlation | Need for training on ethical issues | Need for code of ethics |
Nilsson principles | No personal data from users | Nondisclosure agreement |
Normalisation fosters training | Open data | Opportunities |
Organisational culture | Organisation’s role is key | Origin of data |
Own guidelines | Own technology | Penalties and fines |
Personal data usage responsibility | Personal interest in principles | Principles |
Principles are too general | Principles vs. ethical culture | Privacy |
Privacy check | Privacy during model training | Privacy is the top concern |
Privacy vs. profit | Procedures | Production |
Reactive behaviour | Real vs. training data | Recommendation to customers |
Regressions are explainable | Regulation fosters privacy | Regulation is not enough |
Reliability is application dependent | Reliability vs. profit | Reputation |
Respect to people | Results-oriented | Risk of anonymization |
Robustness | Safety vs. freedom | Sectors |
Sensitive variables | Shared responsibility | SLA—service level agreement |
Social responsibility | Standardisation | Stored data vs. manual scanning |
Sustainability | Symbolic IA—explainable | Synthetic data used |
Terms of use | Textual explanations | Third-party technology |
Third-party tools | To prevent bias during model training | Too many principles |
Tools | Tools are not the main issue | Towards explainability |
Traceability | Trade-off accuracy—ethics | Trade-off explainability—precision |
Transparency | Transparent algorithms | Trust |
Trust over explainability | UE principles | Use of anonymised data |
User data control | User does not know or does not care | Utility of bias |
Visualisation | Work together with client’s team | |
Groups of codes |
---|
Accountability |
Customers |
Ethics |
Explainability |
Fairness |
Good practices |
Origin of data |
Other issues |
Principles |
Privacy |
Standardisation |
Appendix 4: Network maps (groups of codes)
Cite this article
Ibáñez, J.C., Olmeda, M.V. Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. AI & Soc 37, 1663–1687 (2022). https://doi.org/10.1007/s00146-021-01267-0