Adversarial Examples for Malware Detection

  • Kathrin Grosse
  • Nicolas Papernot
  • Praveen Manoharan
  • Michael Backes
  • Patrick McDaniel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10493)

Abstract

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations.

In this work, we expand on existing adversarial example crafting algorithms to construct a highly effective attack that uses adversarial examples against malware detection models. To this end, we identify and overcome key challenges that prevent existing algorithms from being applied against malware detection: our approach operates in discrete and often binary input domains, whereas previous work operated only in continuous and differentiable domains. In addition, our technique preserves the malicious functionality of the adversarially manipulated program. In our evaluation, we train a neural network for malware detection on the DREBIN data set and achieve classification performance matching the state of the art from the literature. Using the augmented adversarial crafting algorithm, we then mislead this classifier for 63% of all malware samples. We also present a detailed evaluation of defensive mechanisms previously introduced in the computer vision context, including distillation and adversarial training, which show promising results.
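The core idea the abstract describes — crafting adversarial examples in a binary feature space while preserving functionality — can be sketched as a greedy loop that only *adds* features (flips 0 to 1), never removes them. The snippet below is a minimal illustration of that restriction, not the paper's algorithm: it uses a toy logistic-regression detector with made-up weights in place of the paper's neural network, and a plain gradient in place of the Jacobian-based saliency the paper builds on.

```python
import numpy as np

# Toy stand-in for a trained detector: logistic regression over binary
# features. The weights are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n_features = 20
w = rng.normal(size=n_features)  # hypothetical model weights
b = 0.0

def malware_score(x):
    """Probability that x is malware under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def craft(x, max_changes=10, threshold=0.5):
    """Greedy crafting loop that only ADDS features (0 -> 1), so the
    program's original functionality cannot be broken by removal."""
    x = x.copy()
    for _ in range(max_changes):
        p = malware_score(x)
        if p < threshold:
            return x, True  # now classified as benign
        # Gradient of the malware score w.r.t. the input; for this
        # logistic model it is proportional to the weight vector.
        grad = p * (1.0 - p) * w
        # Candidates: features currently absent whose addition would
        # lower the malware score (negative gradient).
        candidates = np.where((x == 0) & (grad < 0))[0]
        if candidates.size == 0:
            break  # no helpful feature left to add
        x[candidates[np.argmin(grad[candidates])]] = 1.0
    return x, malware_score(x) < threshold

# Start from a sample the toy model confidently flags as malware.
x0 = (w > 0).astype(float)
adv, evaded = craft(x0)
```

Each iteration strictly lowers the malware score, since only features with negative gradient are added; whether the sample ultimately crosses the decision threshold depends on the budget `max_changes` and on how many benign-leaning features exist, which mirrors the per-sample success rate reported in the paper.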

Notes

Acknowledgments

Nicolas Papernot is supported by a Google PhD Fellowship in Security. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 610150. This work was further partially supported by the German Federal Ministry of Education and Research (BMBF) through funding for the Center for IT-Security, Privacy and Accountability (CISPA) (FKZ: 16KIS0344). This research was also sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Kathrin Grosse (1)
  • Nicolas Papernot (2)
  • Praveen Manoharan (1)
  • Michael Backes (1)
  • Patrick McDaniel (2)
  1. CISPA, Saarland Informatics Campus, Saarland University, Saarbrücken, Germany
  2. School of Electrical Engineering and CS, Pennsylvania State University, State College, USA