Decision-based evasion attacks on tree ensemble classifiers

Abstract

Learning-based classifiers have been shown to be susceptible to adversarial examples. Recent studies have suggested that ensemble classifiers tend to be more robust than single classifiers against evasion attacks. In this paper, we argue that this is not necessarily the case. In particular, we show that a discrete-valued random forest classifier can be easily evaded by adversarial inputs crafted using only the model's decision outputs. The proposed evasion algorithm is gradient-free and fast to implement. Our evaluation results demonstrate that random forests can be even more vulnerable than SVMs, whether single or ensemble, to evasion attacks under both white-box and the more realistic black-box settings.
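The paper's specific algorithm is not reproduced here, but the general idea of a decision-based (hard-label) evasion attack can be illustrated with a minimal sketch: given only label queries against a trained random forest, binary-search along the segment between a correctly classified point and any point the model labels differently, keeping the misclassified endpoint. All names (`hard_label_evade`, the toy data, the model parameters) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def hard_label_evade(predict, x, x_ref, steps=30):
    """Binary-search along the segment from x to x_ref, using only
    hard-label queries, for a misclassified point close to x.
    Requires predict(x) != predict(x_ref)."""
    y0 = predict(x)
    lo, hi = 0.0, 1.0  # lo keeps the original label, hi flips it
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if predict((1 - mid) * x + mid * x_ref) == y0:
            lo = mid
        else:
            hi = mid
    # The hi endpoint is always misclassified by the invariant above.
    return (1 - hi) * x + hi * x_ref

# Toy victim model: the attacker can only query it for predicted labels.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
predict = lambda v: clf.predict(np.asarray(v).reshape(1, -1))[0]

preds = clf.predict(X)
x = X[preds == 0][0]      # point to perturb
x_ref = X[preds == 1][0]  # any point the model labels differently
x_adv = hard_label_evade(predict, x, x_ref)
```

Note that no gradient of the forest is ever needed; the attack exploits only the decision outputs, which matches the black-box threat model the abstract describes.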

Acknowledgements

This work was supported in part by National Natural Science Foundation of China (61876038) and in part by Dongguan University of Technology (KCYKYQD2017003).

Author information

Corresponding author

Correspondence to Yi Wang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Web Information Systems Engineering 2018

Guest Editors: Hakim Hacid, Wojciech Cellary, Hua Wang and Yanchun Zhang

About this article

Cite this article

Zhang, F., Wang, Y., Liu, S. et al. Decision-based evasion attacks on tree ensemble classifiers. World Wide Web 23, 2957–2977 (2020). https://doi.org/10.1007/s11280-020-00813-y

Keywords

  • Adversarial machine learning
  • Tree ensemble classifiers
  • Evasion attacks