
An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Part of the book series: Philosophical Studies Series (PSSP, volume 144)

Abstract

This article reports the findings of AI4People, a year-long initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations – to assess, to develop, to incentivise, and to support good AI – which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.


Notes

  1.

    Besides Luciano Floridi, the members of the Scientific Committee are: Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena. Josh Cowls is the rapporteur. Thomas Burri contributed to an earlier draft.

  2.

    The analysis in this and the following two sections is also available in Cowls and Floridi (2018). Further analysis and more information on the methodology employed will be presented in Cowls and Floridi (Forthcoming).

  3.

    The Montreal Declaration is currently open for comments as part of a redrafting exercise. The principles we refer to here are those publicly announced as of 1 May 2018.

  4.

    The third version of Ethically Aligned Design will be released in 2019 following wider public consultation.

  5.

    Of the six documents, the Asilomar Principles offer the largest number of principles with arguably the broadest scope. The 23 principles are organised under three headings, “research issues”, “ethics and values”, and “longer-term issues”. We have omitted consideration of the five “research issues” here as they are related specifically to the practicalities of AI development, particularly in the narrower context of academia and industry. Similarly, the Partnership’s eight Tenets consist of both intra-organisational objectives and wider principles for the development and use of AI. We include only the wider principles (the first, sixth, and seventh tenets).

  6.

    Determining accountability and responsibility may usefully borrow from the lawyers of Ancient Rome, who followed the formula ‘cuius commoda eius et incommoda’ (‘the person who derives an advantage from a situation must also bear the inconvenience’). A principle some 2,200 years old, with a well-established tradition and elaboration, could properly set the starting level of abstraction in this field.

  7.

    Of course, to the extent that AI systems are ‘products’, general tort law still applies in the same way to AI as it applies in any instance involving defective products or services that injure users or do not perform as claimed or expected.


Acknowledgements

This publication would not have been possible without the generous support of Atomium – European Institute for Science, Media and Democracy. We are particularly grateful to Michelangelo Baracchi Bonvicini, Atomium’s President, to Guido Romeo, its Editor in Chief, the staff of Atomium for their help, and to all the partners of the AI4People project and members of its Forum (http://www.eismd.eu/ai4people) for their feedback. The authors of this article are the only persons responsible for its contents and any remaining mistakes.

Author information


Correspondence to Luciano Floridi.



Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Floridi, L. et al. (2021). An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol. 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_3
