Machine Learning Safety

  • Textbook
  • © 2023


  • Provides a comprehensive and thorough investigation into safety concerns regarding machine learning
  • Shows readers how to identify vulnerabilities in machine learning models and how to improve the models during the training process
  • Demonstrates formal verification approaches for identifying vulnerabilities in machine learning models


About this book

Machine learning algorithms allow computers to learn without being explicitly programmed. Their application is now spreading to highly sophisticated tasks across multiple domains, such as medical diagnostics or fully autonomous vehicles. While this development holds great potential, it also raises new safety concerns, as machine learning has many specificities that make its behaviour prediction and assessment very different from that for explicitly programmed software systems. This book addresses the main safety concerns with regard to machine learning, including its susceptibility to environmental noise and adversarial attacks. Such vulnerabilities have become a major roadblock to the deployment of machine learning in safety-critical applications. The book presents up-to-date techniques for adversarial attacks, which are used to assess the vulnerabilities of machine learning models; formal verification, which is used to determine if a trained machine learning model is free of vulnerabilities; and adversarial training, which is used to enhance the training process and reduce vulnerabilities.
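As an illustration of the kind of adversarial attack the book surveys, the Fast Gradient Sign Method (FGSM) perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below applies FGSM to a hypothetical toy logistic-regression classifier; the weights, input, and perturbation budget are illustrative values, not taken from the book:

```python
import numpy as np

# Hypothetical toy "trained model": logistic regression with fixed
# weights w and bias b (illustrative values, not from the book).
w = np.array([2.0, -1.0])
b = 0.1

def predict_prob(x):
    """P(label = 1 | x) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_attack(x, y, eps):
    """FGSM: shift x by eps in the sign of the input-gradient of the
    binary cross-entropy loss, so the loss for true label y increases."""
    p = predict_prob(x)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.2])          # clean input with true label 1
x_adv = fgsm_attack(x, y=1, eps=0.4)
# The model's confidence in the true label drops on the perturbed input.
print(predict_prob(x), predict_prob(x_adv))
```

Even this two-dimensional example shows the core vulnerability: a bounded, targeted perturbation of the input can flip a classifier's decision while leaving the input visually (here, numerically) close to the original.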

 The book aims to improve readers’ awareness of the potential safety issues regarding machine learning models. In addition, it includes up-to-date techniques for dealing with these issues, equipping readers with not only technical knowledge but also hands-on practical skills.
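For a hands-on flavour of the adversarial-training idea mentioned above, one common scheme alternates two steps: craft FGSM perturbations against the current model, then take a gradient step on those perturbed examples. A minimal sketch on hypothetical synthetic data (all values illustrative, not drawn from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical toy data: two Gaussian blobs in 2-D, labels 0 and 1.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
eps, lr = 0.2, 0.1                 # perturbation budget, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Inner step: FGSM perturbations against the current model.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # Outer step: logistic-regression gradient descent on the
    # adversarial batch instead of the clean one.
    g = sigmoid(X_adv @ w + b) - y
    w -= lr * (X_adv.T @ g) / len(y)
    b -= lr * g.mean()

# Clean accuracy of the adversarially trained model.
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

Training on worst-case perturbed inputs rather than clean ones is the essence of adversarial training: the model is pushed to keep its decision stable inside an eps-ball around each training point.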

Table of contents (17 chapters)

  1. Safety Properties

  2. Safety Threats

  3. Safety Solutions

  4. Extended Safety Solutions

Authors and Affiliations

  • University of Liverpool, Liverpool, UK

    Xiaowei Huang, Gaojie Jin

  • University of Exeter, Exeter, UK

    Wenjie Ruan

About the authors

Xiaowei Huang is currently a Reader of Computer Science and Director of the Autonomous Cyber-Physical Systems Lab at the University of Liverpool (UoL). His research concerns the development of automated verification techniques that ensure the correctness and reliability of intelligent systems. He has published more than 80 papers, primarily in leading conference proceedings and journals in the fields of Artificial Intelligence (e.g. Artificial Intelligence Journal, ACM Transactions on Computational Logic, NeurIPS, AAAI, IJCAI, ECCV), Formal Verification (e.g. CAV, TACAS and Theoretical Computer Science) and Software Engineering (e.g. IEEE Transactions on Reliability, ICSE and ASE). He has been invited to give talks at several leading conferences on the safety and security of applying machine learning algorithms to critical applications. He has co-chaired the AAAI and IJCAI workshop series on Artificial Intelligence Safety and has been the PI or co-PI of several Dstl (Ministry of Defence, UK), EPSRC and EU H2020 projects.

Wenjie Ruan is a Senior Lecturer of Data Science at the University of Exeter, UK. His research interests lie in the adversarial robustness of deep neural networks, and in machine learning and its applications in safety-critical systems, including health data analytics and human-centered computing. His series of research works on device-free human localization and activity recognition for supporting the independent living of the elderly earned him a Doctoral Thesis Excellence Award from the University of Adelaide, the Best Research Poster Award at the 9th ACM International Workshop on IoT and Cloud Computing, and the Best Student Paper Award at the 14th International Conference on Advanced Data Mining and Applications. He was also the recipient of a prestigious DECRA fellowship from the Australian Research Council. Dr. Ruan has published more than 40 papers in international conference proceedings such as AAAI, IJCAI, SIGIR, WWW, ICDM, UbiComp, CIKM and ASE, and has served as a senior PC member, PC member or invited reviewer for more than 10 international conferences, including IJCAI, AAAI, ICML, NeurIPS, CVPR, ICCV, AAMAS and ECML-PKDD. He is the Director of the Exeter Trustworthy AI Lab at the University of Exeter.

Bibliographic Information

  • Book Title: Machine Learning Safety

  • Authors: Xiaowei Huang, Gaojie Jin, Wenjie Ruan

  • Series Title: Artificial Intelligence: Foundations, Theory, and Algorithms

  • DOI:

  • Publisher: Springer Singapore

  • eBook Packages: Computer Science, Computer Science (R0)

  • Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

  • Hardcover ISBN: 978-981-19-6813-6 (published 29 April 2023)

  • Softcover ISBN: 978-981-19-6816-7 (published 17 May 2024)

  • eBook ISBN: 978-981-19-6814-3 (published 28 April 2023)

  • Series ISSN: 2365-3051

  • Series E-ISSN: 2365-306X

  • Edition Number: 1

  • Number of Pages: XVII, 321

  • Number of Illustrations: 1 b/w illustration

  • Topics: Machine Learning, Systems and Data Security, Artificial Intelligence
