Abstract
While the model evaluation methods in Chap. 2 can provide certain insights into the quality of machine learning models, their evaluations rely entirely on the training and test datasets, without considering other factors that might compromise a model's performance. In practice, safety risks may arise at any stage of the lifecycle of a machine learning model. In recent years, the potential risks of machine learning models have been discussed intensely, and many risks have been discovered even in models that achieve excellent performance under traditional evaluation methods. This chapter discusses a set of safety and security properties.
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Huang, X., Jin, G., Ruan, W. (2023). Safety and Security Properties. In: Machine Learning Safety. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-19-6814-3_3
Print ISBN: 978-981-19-6813-6
Online ISBN: 978-981-19-6814-3