
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices

Ethics, Governance, and Policies in Artificial Intelligence

Abstract

The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented, and sometimes replaced, by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI’s potential utility and its impact on society, with the consequence that the ethical debate has gone mainstream. That debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing rapidly, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be readily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and summarises future research needs.


Notes

  1.

    The difference between ethics by design and pro-ethical design is the following: ethics by design can be paternalistic in ways that constrain the choices of agents, because it makes some options less easily available, or not available at all; pro-ethical design, by contrast, still forces agents to make choices, but the nudge is less paternalistic because it does not preclude a course of action, it only requires agents to make up their mind about it. A simple example can clarify the difference. A speed camera is a form of nudging (drivers should respect the speed limits), but it is pro-ethical insofar as it leaves drivers the freedom to choose to pay a ticket, for example in case of an emergency. Speed bumps, by contrast, are a form of ethics by design: they are a traffic calming measure that slows vehicles down through a physical alteration of the road, which is permanent and leaves the driver no real choice. This means that emergency vehicles, such as an ambulance, a police car, or a fire engine, must also slow down, even when responding to an emergency.

  2.

    Google’s AI Principles: https://www.blog.google/technology/ai/ai-principles/

  3.

    IBM’s everyday ethics for AI: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf

  4.

    Microsoft’s guidelines for conversational bots: https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf

  5.

    Intel’s recommendations for public policy principles on AI: https://blogs.intel.com/policy/2017/10/18/naveen-rao-announces-intel-ai-public-policy/#gs.8qnx16

  6.

    The Montreal Declaration for Responsible AI: https://www.montrealdeclaration-responsibleai.com/the-declaration

  7.

    House of Lords Select Committee on Artificial Intelligence: AI in the UK: ready, willing and able?: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

  8.

    European Commission’s Ethics Guidelines for Trustworthy AI: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1

  9.

    Future of Life’s Asilomar AI Principles: https://futureoflife.org/ai-principles/

  10.

    IEEE General Principles of Ethical Autonomous and Intelligent Systems: http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html

  11.

    Floridi et al. (2018).

  12.

    We say fragile here as there are gaps across the different sets of principles, and all use slightly different terminology, making it hard to guarantee that the exact same meaning is intended in all cases. Furthermore, as these principles have no legal grounding, there is nothing to prevent any individual country (or indeed company) from suddenly choosing to adopt a different set for purposes of convenience or competitiveness.

  13.

    “Digital ethics shopping is the malpractice of choosing, adapting, or revising (‘mixing and matching’) ethical principles, guidelines, codes, frameworks or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies etc.) and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards” (Floridi 2019c).

  14.

    “Ethics bluewashing is the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is” (Floridi 2019c).

  15.

    “Ethics shirking is the malpractice of doing increasingly less ‘ethical work’ (such as fulfilling duties, respecting rights, honouring commitments, etc.) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be” (Floridi 2019c).

  16.

    More detail is available here: https://ai-auditingframework.blogspot.com/2019/03/an-overview-of-auditing-framework-for_26.html

  17.

    Scopus is the largest abstract and citation database of peer-reviewed literature: scientific journals, books and conference proceedings: https://www.scopus.com/home.uri

  18.

    arXiv provides open access to over 1,532,009 e-prints in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics: https://arxiv.org/

  19.

    PhilPapers is an index and bibliography of philosophy which collates research content from journals, books, open access archives and papers from relevant conferences such as IACAP. The index currently contains more than 2,377,536 entries. https://philpapers.org/

  20.

    This total includes references related specifically to discourse ethics after an anonymous reviewer made the excellent suggestion that this literature be used as a theoretical frame for the typology.

  21.

    The full list of sources can be accessed here: https://medium.com/@jessicamorley/applied-ai-ethics-reading-resource-list-ed9312499c0a

  22.

    We would like to thank one of the anonymous reviewers for suggesting this framing; it represents a significant improvement to the theoretical grounding of this paper.

  23.

    We recognise that there is an extremely rich literature on ML fairness which this paper does not cover. Much (although not all) of this literature focuses on the definition of fairness and the statistical means of implementing it, which sits slightly outside the scope of the typology; the typology aims to highlight tools and methods that facilitate discussion about the ethical merits of one design decision over another. To fit an entire decade’s worth of literature into a single table row would not do it justice.

  24.

    We would like to thank one of the anonymous reviewers for making this important point.

  25.

    It is entirely possible that this is not always the case: there may be instances where an explicable system has, for example, still had a negative impact on autonomy. Additionally, the view that transparency as explanation is key to accountability is an inherently Western one, and those of other cultures may hold a different view. We make the assumption here for simplicity’s sake.

  26.

    See for example Johansson et al. (2016), Lakkaraju et al. (2017), Russell et al. (2017), and Wachter et al. (2017).


Funding

This study was funded by Digital Catapult.

Author information

Corresponding author

Correspondence to Luciano Floridi.



Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Morley, J., Floridi, L., Kinsey, L., Elhalal, A. (2021). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. In: Floridi, L. (eds) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_10

