
Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context

  • Conference paper
  • In: Cybersecurity, Privacy and Freedom Protection in the Connected World

Abstract

Machine Learning (ML) has advanced rapidly and now provides sophisticated means for developing novel, smart applications. This progress, however, has also exposed new types of hazards that can have destructive consequences and must be addressed. Evasion attacks are among the most commonly exploited attacks, mounted in adversarial settings while the system is in operation. The ML environment is usually assumed to be benign, but in practice perpetrators may exploit vulnerabilities to launch gradient-free or gradient-based adversarial inference attacks against cyber-physical systems (CPS) such as smart buildings. Evasion attacks give perpetrators a means to manipulate, for example, the test-time inputs of a victim ML model. In this article, we review the literature on evasion attacks and their countermeasures and discuss how such attacks can be used to deceive the ML classifier of a CPS smart lock system in order to gain access to the smart building.
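
To make the notion of a gradient-based evasion attack concrete, the following Python sketch illustrates an FGSM-style perturbation against a toy logistic-regression classifier standing in for a smart lock's access decision. This is purely illustrative and not the paper's implementation; the model, feature dimensionality, and the perturbation budget `eps` are hypothetical assumptions.

```python
# Illustrative sketch only (not the chapter's method): a gradient-based,
# FGSM-style evasion step against a toy logistic-regression "smart lock"
# classifier. All names, features, and parameters here are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Return x shifted one signed-gradient step that increases the loss w.r.t. label y."""
    p = sigmoid(np.dot(w, x) + b)      # model's probability of "grant access"
    grad_x = (p - y) * w               # gradient of binary cross-entropy w.r.t. the input x
    return x + eps * np.sign(grad_x)   # small signed perturbation that raises the loss

# Toy smart-lock model: weights over sensor/biometric features (hypothetical).
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)                 # a sample whose correct decision is "deny access"
y = 0.0                                # true label: deny

x_adv = fgsm_perturb(x, y, w, b, eps=0.25)
print("clean score:       ", sigmoid(w @ x + b))
print("adversarial score: ", sigmoid(w @ x_adv + b))  # pushed towards "grant access"
```

Gradient-free variants discussed in the evasion-attack literature (e.g. query-based or genetic-search approaches) pursue the same goal without access to model gradients, typically by repeatedly querying the victim classifier and keeping perturbations that move its output towards the attacker's target.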


Author information

Corresponding author: Petri Vähäkainu


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Vähäkainu, P., Lehto, M., Kariluoto, A. (2021). Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context. In: Jahankhani, H., Jamal, A., Lawson, S. (eds) Cybersecurity, Privacy and Freedom Protection in the Connected World. Advanced Sciences and Technologies for Security Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-68534-8_11

  • DOI: https://doi.org/10.1007/978-3-030-68534-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68533-1

  • Online ISBN: 978-3-030-68534-8

  • eBook Packages: Computer Science (R0)
