Abstract
AI has become a hot topic both among fervent integrators and terrified apocalyptics: the former see AI as the ultimate panacea, while the latter regard it as a grave danger. In between stand several organisations and individuals who consider AI good for humanity provided it respects certain limits. Collaborating with and contributing to international associations and private firms, these people address the problem by trying to mitigate low-level errors, such as negative biases, and high-level ones, such as problems of accountability and governance. Each of these bodies works towards reducing risk through regulations, standards, advice, assurance, and independent audits. A particular phenomenon has been observed: internationally, ethical principles and approaches to risk reduction are becoming increasingly similar. Based on this observation, the authors introduce an analogy to understand the ongoing synchronisation effect. Aware that an ultimate alignment would be impossible (owing to the tendency of ethical reasoning towards incompleteness) and is strongly discouraged (owing to the tendency of universal systems to flatten peripheral voices), the authors propose a theoretical investigation of the synchronisation phenomenon and invite readers to inform it practically through the available tools of data governance. Through the presentation of a case study, they show how ethics-based audits can be a suitable tool for this task.
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Availability of Materials
Not applicable.
Notes
The acronym AI is familiar enough to be used without explaining that it stands for artificial intelligence.
European Union, Council of Europe, Organisation for Economic Co-operation and Development, Office of the United Nations High Commissioner for Human Rights, UK Information Commissioner’s Office.
International Organization for Standardization/International Electrotechnical Commission, Institute of Electrical and Electronics Engineers.
European Committee for Standardization/European Committee for Electrotechnical Standardization (the original acronyms derive from the French version).
Association Française de Normalisation, British Standards Institution, Ente Nazionale Italiano di Unificazione.
Information Systems Audit and Control Association, Responsible AI Institute, Global Digital Foundation.
The Assessment List on Trustworthy Artificial Intelligence, EU High Level Expert Group.
A statistical analysis for bias remediation created by O’Neil Risk Consulting & Algorithmic Auditing.
Human Rights, Democracy, and the Rule of Law Impact Assessment for AI created by the Council of Europe (CAHAI–Ad Hoc Committee on Artificial Intelligence).
According to YouTube, the five most viewed videos on the subject have totalled more than 30 million views.
A Māori term for a New Zealander of European descent, probably originally applied to English-speaking Europeans living in Aotearoa, New Zealand (te Aka Māori Dictionary Project, 2009).
If agent A knows that P or Q and agent B knows that not P, then a supra-agent C will know that Q (by disjunctive syllogism: from P ∨ Q and ¬P, infer Q).
References
Brown, S., Davidovic, J., & Hasan, A. (2021). The algorithm audit: Scoring the algorithms that score us. Big Data and Society, 8(1). https://doi.org/10.1177/2053951720983865
Carrier, R., & Brown, S. (2021). Taxonomy: AI audit, assurance & assessment. Retrieved May 15, 2022 from https://forhumanity.center/blog/taxonomy-ai-audit-assurance-assessment/
Coraggio, M., de Lellis, P., & di Bernardo, M. (2021). Convergence and synchronization in networks of piecewise-smooth systems via distributed discontinuous coupling. Automatica, 129, 109596. https://doi.org/10.1016/J.AUTOMATICA.2021.109596
Dotan, R. (2021). The proliferation of AI ethics principles: What’s next? Montreal AI Ethics Institute, 30(1). Retrieved May 15, 2022 from https://montrealethics.ai/the-proliferation-of-ai-ethics-principles-whats-next/
Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329. https://doi.org/10.1007/s11023-008-9113-7
Floridi, L. (2011). The philosophy of information. Oxford University Press.
Floridi, L. (2013). The ethics of information. Oxford University Press.
Floridi, L. (2018). Soft Ethics and the Governance of the Digital. Philosophy and Technology, 31(1), 1–8. https://doi.org/10.1007/s13347-018-0303-9
Floridi, L. (2020a). Il verde e il blu: Idee ingenue per migliorare la politica [The green and the blue: Naive ideas to improve politics]. Raffaello Cortina Editore.
Floridi, L. (2020b). AI and its new winter: From myths to realities. Philosophy and Technology, 33(1), 1–3. https://doi.org/10.1007/s13347-020-00396-6
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1, 1–15. https://doi.org/10.1162/99608f92.8cd550d1
Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., Mökander, J., & Wen, Y. (2022). CapAI. A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act. Oxford University. Retrieved June 21, 2022 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4064091
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Hidalgo, C. (2015). Why information grows: The evolution of order, from atoms to economies. Penguin Books Limited.
Hodges, C. (2015). Conclusions: Ethical regulation. Law and Corporate Behaviour: Integrating Theories of Regulation, Enforcement, Compliance and Ethics (pp. 695–706). Hart Publishing. https://doi.org/10.5040/9781474201124.ch-022
Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.
ICO. (2020). Guidance on the AI auditing framework Draft guidance for consultation. Information Commissioner’s Office.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Ko tō tātou kāinga tēnei. (2020). Royal Commission of Inquiry into the Attack on Christchurch Mosques on 15 March 2019. Retrieved December 12, 2021 from https://christchurchattack.royalcommission.nz/
Luhmann, N. (1995). Social systems (Writing Science series). Stanford University Press.
May, C. (2017). Book review: Law and corporate behaviour: Integrating theories of regulation, enforcement, compliance and ethics by Christopher Hodges. Political Studies Review, 15(1), 126–127. https://doi.org/10.1177/1478929916676949
McKenzie, P. (2021). “Explosion of ideas”: How Māori concepts are being incorporated into New Zealand law. The Guardian. Retrieved January 03, 2022 from https://www.theguardian.com/world/2021/oct/17/explosion-of-ideas-how-maori-concepts-are-being-incorporated-into-new-zealand-law
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Mökander, J., & Axente, M. (2021). Ethics-based auditing of automated decision-making systems: Intervention points and policy implications. AI and Society. https://doi.org/10.1007/s00146-021-01286-x
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2021a). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2), 241–268. https://doi.org/10.1007/s11023-021-09577-4
Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds and Machines, 31(2), 323–327. https://doi.org/10.1007/s11023-021-09557-8
Mökander, J., Morley, J., Taddeo, M., & Floridi, L. (2021b). Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics, 27(4), 1–30. https://doi.org/10.1007/s11948-021-00319-4
Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a service: A pragmatic operationalisation of AI ethics. Minds and Machines, 31(2), 239–256. https://doi.org/10.1007/s11023-021-09563-w
Nasution, D., & Östermark, R. (2020). The impact of auditors’ awareness of the profession’s reputation for independence on auditors’ ethical judgement. Social Responsibility Journal, 16(8), 1087–1105. https://doi.org/10.1108/SRJ-05-2018-0117
Parsons, T. (1949). Essays in sociological theory. Free Press.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). https://doi.org/10.1145/3351095.3372873
Ramirez, J. P., & Nijmeijer, H. (2020). The secret of the synchronized pendulums. Physics World, 33(1), 36–40. https://doi.org/10.1088/2058-7058/33/1/28
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Data and discrimination: Converting critical concerns into productive inquiry. 64th Annual Meeting of the International Communication Association.
te Aka Māori Dictionary Project. (2009). Te Aka Māori Dictionary. Retrieved February 13, 2022, from https://maoridictionary.co.nz/
Te Tiriti o Waitangi - Treaty Of Waitangi. (1840). Retrieved February 06, 2022 from https://waitangitribunal.govt.nz/treaty-of-waitangi/te-reo-maori-version/
Warren, C. S. (1980). Uniformity of auditing standards: A replication. Journal of Accounting Research, 18(1), 312–324. https://doi.org/10.2307/2490406
Wheeler, T. (2022). U.S. regulatory inaction opened the doors for the EU to step up on internet. Retrieved May 01, 2022 from https://www.brookings.edu/blog/techtank/2022/03/29/u-s-regulatory-inaction-opened-the-doors-for-the-eu-to-step-up-on-internet/
Author information
Authors and Affiliations
Contributions
Both authors have contributed equally to the work.
Corresponding author
Ethics declarations
Consent to Participate
Not applicable.
Conflict of Interest
The authors declare that they are members of the non-profit association ForHumanity. However, the research was academically independent, and all opinions expressed in the article belong solely to its authors.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Light, R., Panai, E. The Self-Synchronisation of AI Ethical Principles. DISO 1, 24 (2022). https://doi.org/10.1007/s44206-022-00023-1