Abstract
Ensuring the safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical domains, which traditionally demand high safety standards that cannot be met by testing otherwise inaccessible black-box systems alone. The interaction between safety and security is a particularly central challenge, as security violations can compromise safety. This paper contributes to addressing both safety and security within a single protection concept applicable during the operation of ML systems: active monitoring of the behavior and the operational context of the data-driven system, based on distance measures over the empirical cumulative distribution function (ECDF). We investigate abstract datasets (XOR, Spiral, Circle) and a current security-specific dataset for intrusion detection (CICIDS2017) of simulated network traffic, using distributional-shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein, and mixed Wasserstein-Anderson-Darling distances. Our preliminary findings indicate a meaningful correlation between ML decisions and the ECDF-based distance measures of the input features. These distances can thus provide a confidence level that can be used (a) to analyze the applicability of the ML system in a given field (safety/security) and (b) to analyze whether the field data was maliciously manipulated. (Our preliminary code and results are available at https://github.com/ISorokos/SafeML.)
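To make the monitoring idea concrete, the sketch below compares the ECDF of each input feature in the training data against that of incoming field data using several of the distance measures named above (Kolmogorov-Smirnov, Anderson-Darling, and Wasserstein via scipy.stats, plus a hand-rolled Kuiper distance). This is a minimal illustration under assumed array shapes, not the SafeML repository's actual implementation; the function names (kuiper_distance, feature_shift_report) are hypothetical.

```python
# Minimal sketch of ECDF-based distributional-shift monitoring.
# Function names are illustrative, not the SafeML repository's API.
import numpy as np
from scipy import stats


def kuiper_distance(x, y):
    """Kuiper distance between two samples: D+ + D-, the largest deviations
    of one ECDF above and below the other."""
    pooled = np.sort(np.concatenate([x, y]))
    # ECDF of each sample evaluated on the pooled support.
    ecdf_x = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    ecdf_y = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    diff = ecdf_x - ecdf_y
    return diff.max() + (-diff).max()


def feature_shift_report(train, field):
    """Per-feature ECDF distances between training and field data.
    `train` and `field` are 2-D arrays of shape (n_samples, n_features)."""
    report = []
    for j in range(train.shape[1]):
        t, f = train[:, j], field[:, j]
        report.append({
            "feature": j,
            "KS": stats.ks_2samp(t, f).statistic,          # Kolmogorov-Smirnov
            "Kuiper": kuiper_distance(t, f),
            "Wasserstein": stats.wasserstein_distance(t, f),
            "AD": stats.anderson_ksamp([t, f]).statistic,  # k-sample Anderson-Darling
        })
    return report


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(2000, 2))
    field = rng.normal(0.3, 1.0, size=(500, 2))  # mildly shifted field data
    for row in feature_shift_report(train, field):
        print(row)
```

In a SafeML-style deployment, such per-feature distances would be compared against thresholds calibrated on held-out training data: large distances would flag either an operational field in which the classifier's decisions can no longer be trusted (safety) or potentially manipulated inputs (security).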
Acknowledgements
This work was supported by the DEIS H2020 Project under Grant 732242. We would like to thank EDF Energy R&D UK Centre, AURA Innovation Centre and the University of Hull for their support.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Aslansefat, K., Sorokos, I., Whiting, D., Tavakoli Kolagari, R., Papadopoulos, Y.: SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures. In: Zeller, M., Höfig, K. (eds.) Model-Based Safety and Assessment. IMBSA 2020. Lecture Notes in Computer Science, vol. 12297. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58920-2_13