Three Reasons Why: Framing the Challenges of Assuring AI

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 11699)

Abstract

Assuring the safety of systems that use Artificial Intelligence (AI), and specifically Machine Learning (ML) components, is difficult because of the unique challenges that AI presents for current assurance practice. What is also missing, however, is an overall understanding of this multi-disciplinary problem space. This paper presents a model that frames the challenges into three categories, aligned with the reasons why they occur. Armed with a common picture of where existing issues and solutions “fit in”, the aim is to help bridge cross-domain conceptual gaps and to provide a clearer understanding to safety practitioners, ML experts, regulators and anyone else involved in the assurance of a system with AI.

Notes

  1. Note that this is true for traditional systems; however, there is exponentially more uncertainty in the behaviour of ML systems.

Acknowledgments

Thanks to the Assuring Autonomy International Programme (AAIP) for support of this work.

Author information

Corresponding author

Correspondence to Nikita Johnson.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Fang, X., Johnson, N. (2019). Three Reasons Why: Framing the Challenges of Assuring AI. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2019. Lecture Notes in Computer Science, vol 11699. Springer, Cham. https://doi.org/10.1007/978-3-030-26250-1_22

  • DOI: https://doi.org/10.1007/978-3-030-26250-1_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26249-5

  • Online ISBN: 978-3-030-26250-1

  • eBook Packages: Computer Science, Computer Science (R0)
