
Using Honeypots in a Decentralized Framework to Defend Against Adversarial Machine-Learning Attacks

  • Fadi Younis
  • Ali Miri
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11605)

Abstract

The market demand for online machine-learning services is increasing, and so are the threats against them. Adversarial inputs represent a new threat to Machine-Learning-as-a-Service (MLaaS) platforms. Meticulously crafted malicious inputs can mislead and confuse a learning model, even when the adversary has only limited access to its inputs and output labels. As a result, interest in defence techniques against these types of attacks has grown. In this paper, we propose a network of High-Interaction Honeypots (HIHP) as a decentralized defence framework that prevents an adversary from corrupting the learning model. We accomplish this by (1) preventing the attacker from correctly learning the labels and approximating the architecture of the black-box system; (2) luring the attacker away, towards a decoy model, using Adversarial HoneyTokens; and (3) creating infeasible computational work for the adversary.
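To make the decoy mechanism concrete, the sketch below illustrates one minimal way such a honeytoken-triggered diversion could work. It is an illustrative assumption only, not the paper's implementation: the names (HONEYTOKENS, decoy_model, flagged_clients, classify) and the hash-matching scheme are invented for this example. A gateway plants bait inputs; any client whose query matches one is flagged and silently pinned to a decoy model that returns plausible but uninformative labels, wasting the adversary's query budget.

    import hashlib
    import random

    # Planted bait inputs (honeytokens). Only a client probing the
    # system's bait data would ever submit one of these.
    HONEYTOKENS = {
        hashlib.sha256(b"decoy-sample-001").hexdigest(),
        hashlib.sha256(b"decoy-sample-002").hexdigest(),
    }

    LABELS = ["cat", "dog", "ship", "plane"]
    flagged_clients = set()  # clients suspected of probing

    def real_model(x: bytes) -> str:
        # Stand-in for the protected MLaaS model.
        return LABELS[len(x) % len(LABELS)]

    def decoy_model(x: bytes) -> str:
        # Stand-in for the honeypot model: plausible but deliberately
        # uninformative labels, so extracted knowledge is worthless.
        return random.choice(LABELS)

    def classify(client_id: str, x: bytes) -> str:
        fingerprint = hashlib.sha256(x).hexdigest()
        if fingerprint in HONEYTOKENS:
            # The client touched a planted bait input: flag it and
            # keep it on the decoy from now on.
            flagged_clients.add(client_id)
        if client_id in flagged_clients:
            return decoy_model(x)
        return real_model(x)

    if __name__ == "__main__":
        print(classify("alice", b"ordinary input"))        # real model
        print(classify("mallory", b"decoy-sample-001"))    # hits bait, flagged
        print(classify("mallory", b"ordinary input"))      # still on the decoy

Once flagged, every subsequent query from the suspected client is answered by the decoy, which serves both aims named in the abstract: the attacker cannot correctly learn labels or approximate the real architecture, and each wasted query adds to the computational cost of the attack.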

Keywords

Adversarial machine learning · Deception-as-a-defence · Exploratory attacks · Evasion attacks · High-interaction honeypots · Honey-tokens


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, Ryerson University, Toronto, Canada
