
Three Reasons Why: Framing the Challenges of Assuring AI

  • Xinwei Fang
  • Nikita Johnson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11699)

Abstract

Assuring the safety of systems that use Artificial Intelligence (AI), specifically Machine Learning (ML) components, is difficult because of the unique challenges that AI presents for current assurance practice. What is also missing, however, is an overall understanding of this multi-disciplinary problem space. This paper presents a model that frames the challenges into three categories, aligned with the reasons why they occur. Armed with a common picture of where existing issues and solutions “fit in”, the aim is to bridge cross-domain conceptual gaps and provide a clearer understanding for safety practitioners, ML experts, regulators and anyone else involved in the assurance of a system with AI.

Keywords

Machine Learning · Safety Assurance · Sensors

Notes

Acknowledgments

Thanks to the Assuring Autonomy International Programme (AAIP) for support of this work.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, University of York, York, UK
