
AI risk assessment using ethical dimensions

  • Original Research, published in AI and Ethics

Abstract

In the design, development, and use of artificial intelligence systems, it is important to ensure that they are safe and trustworthy. This requires a systematic approach to identifying, analyzing, evaluating, mitigating, and monitoring risks throughout the entire lifecycle of an AI system. While standardized risk management processes are being developed, organizations may struggle to implement AI risk management effectively and efficiently because of various implementation gaps. This paper discusses the main gaps in AI risk management and describes a tool that can support organizations in AI risk assessment. The tool consists of a structured process for identifying, analyzing, and evaluating risks in the context of specific AI applications and environments. It accounts for the multidimensionality and context-sensitivity of AI risks, provides a visualization and quantification of those risks, and can inform strategies to mitigate and minimize them.
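The abstract's idea of quantifying AI risk across multiple ethical dimensions can be illustrated with a minimal sketch. Note that the dimensions, the 1–5 likelihood and severity scales, and the product-then-maximum aggregation below are illustrative assumptions for exposition, not the authors' actual method or tool.

```python
# Hypothetical sketch of multidimensional AI risk scoring (illustrative only;
# not the tool described in the paper). Each ethical dimension gets a
# likelihood x severity score; the overall level is the worst dimension.
from dataclasses import dataclass


@dataclass
class DimensionRisk:
    dimension: str   # e.g. "privacy", "fairness" (assumed dimension names)
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        # Classic risk-matrix product: likelihood times severity.
        return self.likelihood * self.severity


def assess(risks: list[DimensionRisk]) -> tuple[dict[str, int], int]:
    """Return per-dimension scores and the overall (maximum) risk level."""
    scores = {r.dimension: r.score for r in risks}
    return scores, max(scores.values())


risks = [
    DimensionRisk("privacy", likelihood=4, severity=3),
    DimensionRisk("fairness", likelihood=2, severity=5),
    DimensionRisk("transparency", likelihood=3, severity=2),
]
scores, overall = assess(risks)
print(scores)   # {'privacy': 12, 'fairness': 10, 'transparency': 6}
print(overall)  # 12
```

Taking the maximum rather than the sum reflects one common design choice in risk matrices: a single severe, likely harm should dominate the overall rating rather than be averaged away; the paper's context-sensitive approach would make such choices explicit per application.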


Data availability

This manuscript does not involve any new data collection or analysis. The work presented is conceptual and based on previously published research and theoretical analysis. Therefore, there are no data sets generated or analyzed during the current study that can be shared or made publicly available.



Author information

Correspondence to Alessio Tartaro.



About this article


Cite this article

Tartaro, A., Panai, E. & Cocchiaro, M.Z. AI risk assessment using ethical dimensions. AI Ethics 4, 105–112 (2024). https://doi.org/10.1007/s43681-023-00401-6

