
Categorization and eccentricity of AI risks: a comparative study of the global AI guidelines

  • Research Paper
  • Published in: Electronic Markets

Abstract

Background

Governments, enterprises, civil organizations, and academics are engaged in promoting normative guidelines aimed at regulating the development and application of Artificial Intelligence (AI) in different fields, such as judicial assistance, social governance, and business services.

Aim

Although more than 160 guidelines have been proposed globally, it remains uncertain whether they are sufficient to meet the governance challenges of AI. Given the absence of a holistic theoretical framework for analyzing the potential risks of AI, it is difficult to determine what is overestimated and what is missing in the extant guidelines. Based on a classic theoretical model from the field of risk management, we developed a four-dimensional structure as a benchmark to analyze the risks of AI and their corresponding governance measures. The structure consists of four pairs of risks: specific-general, legal-ethical, individual-collective, and generational-transgenerational.

Method

Using this framework, we conducted a comparative study of the extant guidelines by coding 123 guidelines comprising 1,023 articles.

Result

We find that the extant guidelines are eccentric: collective risks and generational risks are largely underestimated by stakeholders. Based on this analysis, three gaps and conflicts are outlined for future improvements.



Notes

  1. A comprehensive dataset of global AI ethics guidelines is available at https://inventory.algorithmwatch.org

  2. See https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/derisking-ai-by-design-how-to-build-risk-management-into-ai-development

  3. See Boddington (2018), "Alphabetical list of resources," Ethics for Artificial Intelligence, https://www.cs.ox.ac.uk/efai/resources/alphabetical-list-of-resources/; Winfield (2017), "A round up of robotics and AI ethics," Alan Winfield's Web Log, http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html; "National and international AI strategies" (2018), Future of Life Institute, https://futureoflife.org/national-international-ai-strategies; and "Summaries of AI policy resources" (2018), Future of Life Institute, https://futureoflife.org/ai-policy-resources/.


Acknowledgements

The authors would like to thank Meiyin Huang, Peifen Li, Zongyang Li, Sisi Tang, Yu Wang, Haiming Wu, Qianwei Xu, and Jing Zhang, students and research assistants at Tsinghua University, and Yanglan Xu, Hao Wang, Zhihao Chen, Mingyang Wei, and Jiayu Zhang, students at the University of Electronic Science and Technology of China, for their excellent work in cross-checking the article codes. This work was partially supported by the National Key Research and Development Program of China (2018YFC0832305), the National Natural Science Foundation of China (71974111, 91646103), and the National Social Science Fund of China (18CZZ025).

Author information


Corresponding author

Correspondence to Nan Zhang.

Additional information

Responsible Editor: Lin Xiao

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Below is the link to the electronic supplementary material.

Supplementary file 1 (XLSX 225 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jia, K., Zhang, N. Categorization and eccentricity of AI risks: a comparative study of the global AI guidelines. Electron Markets 32, 59–71 (2022). https://doi.org/10.1007/s12525-021-00480-5

