
Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study

Open Forum · AI & SOCIETY

Abstract

Despite the growth of research on ethics in artificial intelligence, most efforts have focused on debating principles and guidelines for responsible AI, while the “how” of applied ethics has received far less attention. This paper aims to advance research on the gap between principles and practice in AI ethics by identifying how companies apply those guidelines and principles in practice. Using a qualitative methodology based on 22 semi-structured interviews and two focus groups, the study examines how companies approach ethical issues related to AI systems. A structured analysis of the transcripts surfaced numerous actual practices and findings, which are presented around the main research topics: ethics and principles, privacy, explainability, and fairness. The interviewees also raised issues of accountability and governance. Finally, several recommendations are offered, such as developing sector-specific regulations, fostering a data-driven organisational culture, considering the algorithm’s complete life cycle, developing and using a specific code of ethics, and providing specific training on ethical issues. Despite some obvious limitations, such as the type and number of companies interviewed, this work identifies real examples and concrete priorities for advancing research on the gap between principles and practice in AI ethics, with a specific focus on Spanish companies.

Fig. 1 (Source: developed by the authors)

Fig. 2 (Source: developed by the authors)


Notes

  1. See Dastin (2018).

  2. See Confessore (2018).

  3. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html.

  4. For an updated list of principles, please check https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/.

  5. For example, https://mapa.estrategiaia.es/mapa, which provides an updated list of organisations related to AI in Spain.

  6. The number after “Quote” refers to the company ID (Table 3) from which the actual quote has been drawn.

  7. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206.


Author information


Corresponding author

Correspondence to Javier Camacho Ibáñez.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Interview script

Topic | Question | Time
Opening | Project objectives | 2′
Opening | Presentation; role in the organisation; approach to AI | 5′
Recent project | Describe a project you are working on or have worked on recently (last 3–6 months): objective; users/segment; AI’s influence on the project; tools used | 5′
Guidelines and principles | Which principles/guidelines are used for development, and in which phases (design, test, control, implementation)? Is there a product design code specific to AI? Does it incorporate ethics criteria into the decision-making model? | 10′
Bias and fairness | What do you mean by bias and fairness? Do you incorporate criteria for these concepts? Examples (e.g., how an AI system can perpetuate existing biases)? | 10′
Explainability | What do you mean by explainability and interpretability? Are they different? Do you incorporate criteria for these concepts? Examples (kinds of algorithms, kinds of tools)? | 10′
Privacy | What do you mean by privacy? Do you incorporate criteria for this concept? Examples (data source, data type, collection, use, storage)? | 10′
Issues | Problems/dilemmas encountered | 5′
Closing | Any final reflections? Thank you and next steps | 5′

  1. Source: developed by the authors

Appendix 2: Focus group discussion script

Item | Time
Welcome | 3′
Brief description of the objective of the research | 2′
Introductory question: general impression of the draft report | 10′
Key issues, main question 1 (show the findings table): From this list of findings, which ones do you consider most relevant, and why? | 15′
Key issues, main question 2: What has caught your eye? | 15′
Closing question 1: What next step would you consider attractive to deepen the research? | 15′
Thank you and next steps | 5′

  1. Source: developed by the authors

Appendix 3: Codes and groups of codes

Codes

Accuracy

Against profit margin

Algorithm supervision

Algorithm training

Algorithms carry bias

Anonymization not enough

Anonymization solutions

Anonymization used

Applications

Asilomar principles

Autonomy

Balanced data

Beneficence

Benefit vs. the risk of being wrong

BIAS

Bias by correlation

Bias due to the analyst work

Bias due to the origin of data

Bias in the data

Bias is a key topic

Bias is no concern

Bias is not critical

Bias not found

Bias people vs. machines

Bias vs. diversity

Bias vs. explainability

Bias vs. objective criteria

Bias vs. unbalanced data

Broad approach to explainability

Business KPIs and Model KPIs

Business team’s expectations

Business teams vs. technical teams

Certification

C-level’s interest in reputation

Client does not ask ethical questions

Client more interested in the result than in the why

Client wants to know why

Clients

Client’s criteria on privacy

Client’s ethical criteria

Client’s specific training

Collinearity

Company data and external data

Company size

Complexity of explainability

Compliance mechanisms

Control of data in origin

Controlled access

Cross data analysis

Cross ethical training

Cross-integration

Data availability is key

Data culture

Data culture training

Data gathering

Data governance

Data misuse risk

Data ownership

Data quality

Dataset selection

Data source problem due to decalibration

Decision-making

Degradation in OPs

DevOps principles

Differential privacy is not well known

Diversity

Effect on employment

Ethical checkpoints

Ethical concern is not that important

Ethical dimension

Ethical training not needed

Ethics

Ethics as a process

Ethics by design not used

Ethics by design used

Ethics of data vs. system

Ethics of system vs. data

Ethics training—not done

EU vs. USA differences

Explainability

Explainability—life cycle

Explainability—not feasible

Explainability fostered by regulation

Fast deployment

Federated learning

Federated learning and bias

Future

Future of regulation

Gap between principles and practice

GDPR

Gender bias

General knowledge about principles

Global explanations

Google principles

Hierarchical analysis to control for bias

Human being

Human control

Hype

Identification of design and control stages

Importance of regulation vs. ethics

Individual knowledge vs. procedures

Inequality

Integrative approach

Internal client vs. external client

Internal training is key

Interpretability

ISO 27001

Knowledge about LIME

Knowledge about SHAP

Lack of auditing

Lack of formal checkpoints

Lack of metrics and indicators

Local explanations

Low interest in the why but high interest in results

Magic expectations

Misuse

MLOps principles

Model selection

Montreal principles

More reliability for bigger impact

Necessity vs. correlation

Need for training on ethical issues

Need for code of ethics

Nilsson principles

No personal data from users

Nondisclosure agreement

Normalisation fosters training

Open data

Opportunities

Organisational culture

Organisation’s role is key

Origin of data

Own guidelines

Own technology

Penalties and fines

Personal data usage responsibility

Personal interest in principles

Principles

Principles are too general

Principles vs. ethical culture

Privacy

Privacy check

Privacy during model training

Privacy is the top concern

Privacy vs. profit

Procedures

Production

Reactive behaviour

Real vs. training data

Recommendation to customers

Regressions are explainable

Regulation fosters privacy

Regulation is not enough

Reliability is application dependent

Reliability vs. profit

Reputation

Respect to people

Results-oriented

Risk of anonymization

Robustness

Safety vs. freedom

Sectors

Sensitive variables

Shared responsibility

SLA—service level agreement

Social responsibility

Standardisation

Stored data vs. manual scanning

Sustainability

Symbolic AI: explainable

Synthetic data used

Terms of use

Textual explanations

Third-party technology

Third-party tools

To prevent bias during model training

Too many principles

Tools

Tools are not the main issue

Towards explainability

Traceability

Trade-off accuracy—ethics

Trade-off explainability—precision

Transparency

Transparent algorithms

Trust

Trust over explainability

EU principles

Use of anonymised data

User data control

User does not know or does not care

Utility of bias

Visualisation

Work together with client’s team

 
  1. Source: developed by the authors

Groups of codes

Accountability

Customers

Ethics

Explainability

Fairness

Good practices

Origin of data

Other issues

Principles

Privacy

Standardization

  1. Source: developed by the authors

Appendix 4: Network maps (groups of codes)

Figures a to k: network maps, one per group of codes.

About this article

Cite this article

Ibáñez, J.C., Olmeda, M.V. Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. AI & Soc 37, 1663–1687 (2022). https://doi.org/10.1007/s00146-021-01267-0

