The Self-Synchronisation of AI Ethical Principles

  • Original Paper
  • Published in Digital Society

Abstract

AI has become a hot topic both among fervent integrators and terrified apocalyptics; the former see AI as the ultimate panacea, while the latter regard it as a great danger. In between, there are several organisations and individuals who consider AI to be good for humanity provided it respects certain limits. Collaborating with and contributing to international associations and private firms, these people address the problem by trying to mitigate low-level errors, such as negative biases, or high-level ones, such as problems of accountability and governance. Each of these bodies works towards the same goal: reducing AI-related risks through regulations, standards, advice, assurances, and independent audits. A particular phenomenon has been observed: internationally, ethical principles and approaches to risk reduction are becoming increasingly similar. Based on this observation, the authors introduce an analogy to understand the ongoing synchronisation effect. Aware that an ultimate alignment would be impossible (owing to the tendency of ethical reasoning towards incompleteness) and is strongly discouraged (owing to the tendency of universal systems to flatten peripheral voices), the authors propose a theoretical investigation of the phenomenon of synchronisation and invite readers to inform it practically through the available tools of data governance. They show, through the presentation of a case study, how ethics-based audits can be a suitable tool for the task.


Data Availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Availability of Materials

Not applicable.

Notes

  1. The acronym AI is familiar enough to be used without explaining that it refers to artificial intelligence.

  2. European Union, Council of Europe, Organisation for Economic Co-operation and Development, Office of the United Nations High Commissioner for Human Rights, UK Information Commissioner’s Office.

  3. International Organization for Standardization/International Electrotechnical Commission, Institute of Electrical and Electronics Engineers.

  4. European Committee for Standardization/European Committee for Electrotechnical Standardization (the acronyms CEN and CENELEC derive from the French names).

  5. Association Française de Normalisation, British Standards Institution, Ente Nazionale Italiano di Unificazione.

  6. Information Systems Audit and Control Association, Responsible AI Institute, Global Digital Foundation.

  7. The Assessment List for Trustworthy Artificial Intelligence (ALTAI), created by the EU High-Level Expert Group on AI.

  8. A statistical analysis for bias remediation created by O’Neil Risk Consulting & Algorithmic Auditing.

  9. Human Rights, Democracy, and the Rule of Law Impact Assessment for AI, created by the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI).

  10. According to YouTube, the five most viewed videos on the subject have totalled more than 30 million views.

  11. https://en.wikipedia.org/wiki/Pendulum

  12. A Māori term for a New Zealander of European descent, probably originally applied to English-speaking Europeans living in Aotearoa, New Zealand (Te Aka Māori Dictionary Project, 2009).

  13. If agent A knows that P or Q and agent B knows that not P, then a supra-agent C, pooling their knowledge, will know that Q (see the sketch following these notes).

  14. https://www.dictionary.com/browse/universal

  15. https://en.wiktionary.org/wiki/absolute

  16. http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0059%3Aentry%3Duniversalis

  17. https://en.wikipedia.org/wiki/2021_Facebook_leak

  18. https://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics

  19. https://forhumanity.center/contributors/
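
Note 13 can be read as a statement about distributed knowledge in epistemic logic. The following display is a minimal sketch of that reading, using the standard knowledge operators K_A and K_B and a distributed-knowledge operator D_{A,B}; this notation is assumed here for illustration and does not appear elsewhere in the article.

% Distributed knowledge: the "supra-agent" C pools what A and B know
% separately; neither A nor B alone knows Q, but together they do.
\[
  \bigl( K_A(P \lor Q) \land K_B(\lnot P) \bigr) \;\rightarrow\; D_{\{A,B\}}\,Q
\]

The implication captures the footnote's point: combining the two agents' partial knowledge yields a conclusion available to neither agent in isolation.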


Author information


Contributions

Both authors have contributed equally to the work.

Corresponding author

Correspondence to Enrico Panai.

Ethics declarations

Consent to Participate

Not applicable.

Conflict of Interest

The authors declare that they are members of the non-profit association ForHumanity. However, the research was academically independent, and all opinions expressed in the article belong solely to its authors.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Light, R., Panai, E. The Self-Synchronisation of AI Ethical Principles. DISO 1, 24 (2022). https://doi.org/10.1007/s44206-022-00023-1

