Applied Intelligence, Volume 47, Issue 2, pp 558–569

Evaluation of random forest classifier in security domain

  • Zeinab Khorshidpour
  • Sattar Hashemi
  • Ali Hamzeh

Abstract

Security applications such as spam filtering and malware detection are intrinsically adversarial: an attacker actively attempts to mislead the detection system. This adversarial nature distinguishes security applications from classical machine learning problems; for instance, an adversary (attacker) may change the distribution of test data, violating the data stationarity assumption common to machine learning techniques. Since machine learning methods are not inherently adversary-aware, a classifier designer should investigate the robustness of a learning system under attack. To this end, recent studies have modeled identified attacks against machine learning-based detection systems, allowing a classifier designer to evaluate the performance of a learning system against the modeled attacks. Prior research explored a gradient-based approach to devise attacks against classifiers with differentiable discriminant functions, such as SVM. However, several powerful classifiers with non-differentiable decision boundaries, such as Random Forest, are commonly used across security applications. In this paper, we present a novel approach to model an attack against classifiers with non-differentiable decision boundaries. In the experimentation, we first present an example that visually shows the effect of a successful attack on the MNIST handwritten digits classification task. We then conduct experiments on two well-known applications in the security domain: spam filtering and malware detection in PDF files. The experimental results demonstrate that the proposed attack successfully evades the Random Forest classifier and effectively degrades its performance.
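The core idea the abstract describes, attacking a non-differentiable classifier by descending the gradient of a differentiable stand-in, can be sketched as follows. This is a minimal illustration under assumed choices (synthetic data, a logistic-regression surrogate trained on the target's labels, a fixed step size), not the authors' exact method:

```python
# Sketch: evading a Random Forest (non-differentiable decision boundary)
# by following the gradient of a differentiable surrogate classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data standing in for a security task
# (class 1 = "malicious", class 0 = "benign").
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The target classifier under attack: no usable gradient.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a differentiable model trained to mimic the target's labels.
surrogate = LogisticRegression(max_iter=1000).fit(X, target.predict(X))

# Pick a sample the forest flags as malicious and perturb it.
idx = int(np.where(target.predict(X) == 1)[0][0])
x_adv = X[idx].copy()

# Gradient descent on the surrogate's linear discriminant g(x) = w.x + b:
# moving against w lowers the "malicious" score.
w = surrogate.coef_.ravel()
step = 0.05 * w / np.linalg.norm(w)
for _ in range(100):
    x_adv -= step
    if target.predict(x_adv.reshape(1, -1))[0] == 0:
        break  # the target classifier has been evaded
```

In practice the attacker would also bound the perturbation so the modified sample (e.g. a spam email or PDF file) remains functional, a constraint this toy sketch omits.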

Keywords

Machine learning · Security application · Evasion attack · Discriminant function · Surrogate classifier

Notes

Acknowledgments

The authors gratefully acknowledge Dr. Richard Wallace from Riverside Research for suggesting changes, reviewing, and editing the grammar and readability of this paper.

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Zeinab Khorshidpour (1)
  • Sattar Hashemi (1)
  • Ali Hamzeh (1)
  1. Department of Electronic and Computer Engineering, Shiraz University, Shiraz, Iran