Abstract
Adversarial machine learning deals with adversarial sample generation: crafting false input data capable of fooling a machine learning model. For instance, attributes of goodware can be added to a malware executable so that a classifier identifies the malicious sample as benign. As the name suggests, "adversary" means opponent or enemy. If you are wondering what an enemy has to do with machine learning, this chapter will show you how vulnerable machine learning models are and how easily they can be misled during the learning process. Any input that a machine learning model misclassifies in this way is called an adversarial sample.
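The evasion idea in the abstract can be illustrated with a minimal sketch. This is not the chapter's method; it assumes a hypothetical linear malware detector with made-up weights, and perturbs a sample's features against the gradient of the detector's score (an FGSM-style step) until the label flips from malicious to benign.

```python
import numpy as np

# Hypothetical linear "malware detector": score = w.x + b, malicious if score > 0.
# The weights, bias, and three features are illustrative assumptions, not real data.
w = np.array([2.0, -1.5, 1.0])
b = -0.5

def is_malicious(x):
    return float(w @ x + b) > 0

# A sample the detector flags as malicious.
x = np.array([1.0, 0.2, 0.3])

# FGSM-style perturbation: step each feature against the sign of the weight
# (the gradient of the score with respect to the input), lowering the score.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(is_malicious(x))      # True  (original sample is detected)
print(is_malicious(x_adv))  # False (perturbed sample evades the detector)
```

In practice the attacker cannot change features arbitrarily (a modified executable must still run), so real evasion attacks constrain the perturbation, e.g. only adding benign attributes, as in the goodware example above.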
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Thomas, T., P. Vijayaraghavan, A., Emmanuel, S. (2020). Adversarial Machine Learning in Cybersecurity. In: Machine Learning Approaches in Cyber Security Analytics. Springer, Singapore. https://doi.org/10.1007/978-981-15-1706-8_10
DOI: https://doi.org/10.1007/978-981-15-1706-8_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-1705-1
Online ISBN: 978-981-15-1706-8
eBook Packages: Computer Science (R0)