Quantifying Assurance in Learning-Enabled Systems

Conference paper in Computer Safety, Reliability, and Security (SAFECOMP 2020).

Part of the book series: Lecture Notes in Computer Science, volume 12234.

Abstract

Dependability assurance of systems embedding machine learning (ML) components—so-called learning-enabled systems (LESs)—is a key step for their use in safety-critical applications. In emerging standardization and guidance efforts, there is a growing consensus on the value of using assurance cases for that purpose. This paper develops a quantitative notion of assurance that a learning-enabled system (LES) is dependable, as a core component of its assurance case, extending our prior work that applied to ML components. Specifically, we characterize LES assurance in the form of assurance measures: a probabilistic quantification of confidence that an LES possesses system-level properties associated with functional capabilities and dependability attributes. We illustrate the utility of assurance measures by applying them to a real-world autonomous aviation system, describing their role both in i) guiding high-level, runtime risk-mitigation decisions and ii) serving as a core component of the associated dynamic assurance case.
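To make the notion concrete: an assurance measure of this kind can be read as the probability that a system-level property holds, e.g., that a tracking error stays within an allowed bound. The sketch below is illustrative only, not the paper's actual method; the Gaussian error model and every numeric value are assumptions introduced here.

```python
import random

def assurance_measure(sample_error, bound, n=100_000, seed=0):
    """Estimate P(|error| <= bound) by Monte Carlo sampling: the
    confidence that the system possesses the property
    'error stays within bound'."""
    rng = random.Random(seed)
    hits = sum(abs(sample_error(rng)) <= bound for _ in range(n))
    return hits / n

# Illustrative only: cross-track error modelled as Gaussian,
# mean 0.2 m (a slight bias), standard deviation 0.5 m, bound 1 m.
p = assurance_measure(lambda rng: rng.gauss(0.2, 0.5), bound=1.0)
```

Substituting a distribution estimated from flight data or simulation, and a bound elicited from the performance objectives, would yield a system-specific quantity in the spirit of what the paper calls an assurance measure.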

This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract FA8750-18-C-0094 of the Assured Autonomy Program. The opinions, findings, recommendations or conclusions expressed are those of the authors and should not be interpreted as representing the official views or policies of DARPA, AFRL, the Department of Defense, or the United States Government.


Notes

  1. The systematic reasoning that captures the rationale why specific conclusions, e.g., of system safety, can be drawn from the evidence supplied.

  2. Henceforth, we do not distinguish assurance properties from assurance claims.

  3. When the assurance property is itself probabilistic, the corresponding assurance measure is deterministic, i.e., either 0 or 1.

  4. The horizontal distance between the aircraft nose wheel and the runway centerline.

  5. Heading refers to the compass direction in which an object is pointed; heading error (HE) here is thus the angular distance between the aircraft heading and the runway heading.

  6. Our industry collaborators elicited the exact performance objectives from current, proficient professional pilots.

  7. The second offset was introduced at the request of our industry collaborators, to support integrating the assurance measure on the LES platform.

  8. Although integrating assurance measures with a Contingency Management System (CMS) is closely related to the work here, it is out of scope for this paper and will be the topic of a forthcoming article.
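As an aside on the heading-error definition in note 5: the signed angular distance between two compass headings must be wrapped into [-180°, 180°), since naive subtraction of, say, 350° and 10° gives 340° rather than the true 20° separation. A minimal sketch, not taken from the paper:

```python
def heading_error(aircraft_heading_deg, runway_heading_deg):
    """Signed angular distance between the aircraft heading and the
    runway heading, wrapped into the interval [-180, 180) degrees."""
    return (aircraft_heading_deg - runway_heading_deg + 180.0) % 360.0 - 180.0

# A 350-degree heading on a 10-degree runway is a -20-degree error,
# not a 340-degree one.
```

The modulo-based wrap relies on Python's `%` returning a non-negative result for a positive divisor, so no explicit sign handling is needed.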


Author information

Correspondence to Ganesh Pai.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Asaadi, E., Denney, E., Pai, G. (2020). Quantifying Assurance in Learning-Enabled Systems. In: Casimiro, A., Ortmeier, F., Bitsch, F., Ferreira, P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020. Lecture Notes in Computer Science(), vol 12234. Springer, Cham. https://doi.org/10.1007/978-3-030-54549-9_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-54548-2

  • Online ISBN: 978-3-030-54549-9

  • eBook Packages: Computer Science (R0)
