Tackling Uncertainty in Safety Assurance for Machine Learning: Continuous Argument Engineering with Attributed Tests
Implementations constructed by machine learning from training data involve unique kinds of uncertainty, which affects the strategy and activities for safety assurance. In this paper, we investigate this point in terms of continuous argument engineering with granular performance evaluation over the expected operational domain. We employ an attributed testing method for evaluating an implemented model against an explicit (partial) specification. We then present experimental results that demonstrate how safety arguments are affected by the uncertainty of machine learning; as an example, we reveal a weakness of a model that could not have been predicted beforehand. Finally, we present our tool for continuous argument engineering, which tracks the latest state of assurance.
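The granular, per-attribute evaluation described above can be illustrated with a minimal sketch: test cases are tagged with an attribute that identifies a partition of the operational domain (e.g. lighting or weather conditions), and accuracy is computed per partition so that weak regions of the domain become visible. The names `TestCase` and `attributed_accuracy` are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of attributed testing: compute accuracy per
# attribute value over partitions of the expected operational domain.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TestCase:
    attribute: str  # illustrative domain partition, e.g. "day" / "night"
    expected: int   # ground-truth label
    predicted: int  # model output


def attributed_accuracy(cases):
    """Return accuracy per attribute, exposing weak partitions."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for c in cases:
        totals[c.attribute] += 1
        hits[c.attribute] += int(c.expected == c.predicted)
    return {a: hits[a] / totals[a] for a in totals}


cases = [
    TestCase("day", 1, 1), TestCase("day", 0, 0),
    TestCase("night", 1, 0), TestCase("night", 1, 1),
]
print(attributed_accuracy(cases))  # → {'day': 1.0, 'night': 0.5}
```

A uniform overall accuracy (here 0.75) would hide the weaker "night" partition; reporting per-attribute scores is what lets such weaknesses feed back into the safety argument.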