
Timing Attacks on Machine Learning: State of the Art

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1037)

Abstract

Machine learning plays a significant role in today’s business sectors and governments, where it is increasingly used as a tool to support decision making and process automation. However, these tools are not inherently robust or secure: they can be vulnerable to adversarial modification that causes misclassification or compromises system security. As such, the field of adversarial machine learning has emerged to study the vulnerabilities of machine learning models and algorithms and to secure them against adversarial manipulation. In this paper, we present the recently proposed taxonomy of attacks on machine learning and draw distinctions between it and other taxonomies. Moreover, this paper brings together the state of the art, in both theory and practice, on decision-timing attacks on machine learning and on defense strategies against them. Given the increasing research interest in this field, we hope this study provides readers with the essential knowledge to successfully engage in the research and practice of machine learning in adversarial environments.
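
One axis of the attack taxonomies discussed above is the timing of the attack: training-time (poisoning) attacks corrupt the data a model learns from, while decision-time (evasion) attacks perturb inputs to an already trained model. As a purely illustrative sketch of the latter, the Python fragment below evades a hypothetical linear classifier; the weights, threshold, and step size are assumptions made for demonstration and are not taken from this paper.

    # Illustrative sketch only: a decision-time (evasion) attack on a
    # hypothetical linear classifier. Weights, threshold, and step size
    # are assumptions for demonstration, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=10), -0.5   # "trained" model: flag input x if w.x + b > 0

    def score(x):
        return float(w @ x + b)

    # Start from a point the model flags as malicious (score forced to ~1.0).
    x = rng.normal(size=10)
    x += (1.0 - score(x)) * w / (w @ w)

    # Evasion at decision time: nudge the input against the score gradient
    # (here simply w) until it crosses the decision boundary, keeping the
    # total perturbation small.
    x_adv, step = x.copy(), 0.05
    while score(x_adv) > 0:
        x_adv -= step * w / np.linalg.norm(w)

    print("score before:", round(score(x), 3))                      # > 0, flagged
    print("score after: ", round(score(x_adv), 3))                  # <= 0, evades
    print("L2 perturbation:", round(np.linalg.norm(x_adv - x), 3))  # small change

The complementary training-time class of attacks would instead tamper with the data used to fit the model parameters before deployment.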

Keywords

Adversarial machine learning · Timing attacks · Manipulation · Learning models


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

1. Norwegian University of Science and Technology, Gjøvik, Norway
