Abstract
Assuring the safety of systems that use Artificial Intelligence (AI), specifically Machine Learning (ML) components, is difficult because AI presents unique challenges to current assurance practice. What is also missing, however, is an overall understanding of this multi-disciplinary problem space. This paper presents a model that frames the challenges into three categories aligned with the reasons they occur. Armed with a common picture of where existing issues and solutions “fit in”, the aim is to bridge cross-domain conceptual gaps and provide a clearer understanding to safety practitioners, ML experts, regulators and anyone else involved in assuring a system that incorporates AI.
Notes
1. Note that this is also true for traditional systems; however, there is exponentially more uncertainty in the behaviour of ML systems.
Acknowledgments
Thanks to the Assuring Autonomy International Programme (AAIP) for support of this work.
© 2019 Springer Nature Switzerland AG
Cite this paper
Fang, X., Johnson, N. (2019). Three Reasons Why: Framing the Challenges of Assuring AI. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) Computer Safety, Reliability, and Security. SAFECOMP 2019. Lecture Notes in Computer Science, vol. 11699. Springer, Cham. https://doi.org/10.1007/978-3-030-26250-1_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-26249-5
Online ISBN: 978-3-030-26250-1