Machine Learning, Volume 81, Issue 1, pp 69–83

Mining adversarial patterns via regularized loss minimization

Abstract

Traditional classification methods assume that the training and test data arise from the same underlying distribution. In several adversarial settings, however, the test set is deliberately constructed to increase the error rate of the classifier. A prominent example is spam email, where words are transformed to get around the word-based features embedded in a spam filter.

In this paper we model the interaction between a data miner and an adversary as a Stackelberg game with convex loss functions. We solve for the Nash equilibrium, which is a pair of strategies (classifier weights, data transformations) from which neither the data miner nor the adversary has an incentive to deviate. Experiments on synthetic and real data demonstrate that the Nash equilibrium yields classifiers that are more robust to subsequent manipulation of the data, and that it provides interesting insights about both the data miner and the adversary.
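The learner–adversary interaction described above can be illustrated by alternating best responses. The sketch below is not the paper's algorithm: it assumes a logistic-loss learner and a hypothetical adversary with a quadratic movement cost, whose best response then has the closed form delta = -w / (2·cost).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: class -1 (legitimate) and class +1 (spam-like)
n, d = 100, 2
X_neg = rng.normal(-1.0, 1.0, size=(n, d))
X_pos = rng.normal(+1.0, 1.0, size=(n, d))
y = np.concatenate([-np.ones(n), np.ones(n)])

def fit_logistic(X, y, lam=0.1, lr=0.1, steps=500):
    """Learner's best response: minimize L2-regularized logistic loss
    by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = np.clip(y * (X @ w), -30, 30)      # numerical safety
        sigma = 1.0 / (1.0 + np.exp(margins))        # d(loss)/d(margin) factor
        grad = -(y[:, None] * X * sigma[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def adversary_shift(X_pos, w, cost=0.5):
    """Hypothetical adversary's best response: shift every positive point
    against w, trading off a quadratic cost on the shift.
    Minimizing w.(x + delta) + cost*||delta||^2 gives delta = -w/(2*cost)."""
    return X_pos - w / (2.0 * cost)

# Alternate best responses to approximate an equilibrium pair (w, transform)
Xp = X_pos.copy()
for _ in range(20):
    X = np.vstack([X_neg, Xp])
    w = fit_logistic(X, y)
    Xp = adversary_shift(X_pos, w)   # adversary transforms the original data

print("approximate equilibrium weights:", w)
```

At the fixed point of this iteration neither player improves by deviating unilaterally, which is the equilibrium notion the abstract refers to; the paper itself works with general convex losses rather than this specific payoff.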

Keywords

Stackelberg game · Nash equilibrium · Loss minimization

Copyright information

© The Author(s) 2010

Authors and Affiliations

School of Information Technologies, University of Sydney, Sydney, Australia
