Backdoor Attacks in Neural Networks – A Systematic Evaluation on Multiple Traffic Sign Datasets

  • Huma Rehman
  • Andreas Ekelhart
  • Rudolf Mayer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11713)

Abstract

Machine learning, and deep learning in particular, has seen tremendous advances and surpassed human-level performance on a number of tasks. Machine learning is increasingly integrated into many applications, thereby becoming part of everyday life and automating decisions based on predictions. In certain domains, such as medical diagnosis, security, autonomous driving, and financial trading, wrong predictions can have a significant impact on individuals and groups. While advances in prediction accuracy have been impressive, machine learning systems can still make rather unexpected mistakes on relatively easy examples, and the robustness of algorithms has become a concern before deploying such systems in real-world applications. Recent research has shown that deep neural networks in particular are susceptible to adversarial attacks that can trigger such wrong predictions. For image analysis tasks, these attacks take the form of small perturbations that remain (almost) imperceptible to human vision. Such attacks can cause a neural network classifier to completely change its prediction about an image, with the model even reporting high confidence in the wrong prediction. Of particular interest to an attacker are so-called backdoor attacks, where a specific key is embedded into a data sample to trigger a pre-defined class prediction. In this paper, we systematically evaluate the effectiveness of poisoning (backdoor) attacks on a number of benchmark datasets from the domain of autonomous driving.
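To make the attack model concrete, the sketch below illustrates a common backdoor (data poisoning) scheme of the kind discussed here: a small, fixed pixel pattern (the "key") is stamped into a fraction of the training images, whose labels are flipped to the attacker's target class, so that a network trained on this data learns to associate the pattern with that class. The trigger shape, location, poison rate, and function names are illustrative assumptions, not the exact setup evaluated in the paper.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   poison_fraction=0.1, trigger_size=3, seed=0):
    """Stamp a small white-square trigger into a random subset of the
    training images and relabel those samples to the target class.

    images: float array (N, H, W, C) with values in [0, 1]
    labels: int array (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Choose which samples to poison.
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp the trigger (a white square) into the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0

    # The backdoor: every triggered sample gets the attacker's label.
    labels[idx] = target_class
    return images, labels
```

A model trained on such a poisoned set behaves normally on clean images, but stamping the same trigger onto any test image, e.g. a speed-limit sign, steers the prediction toward the target class.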

Keywords

Deep learning · Robustness · Adversarial attacks · Backdoor attacks

Notes

Acknowledgments

The competence center SBA Research (SBA-K1) is funded within the framework of COMET — Competence Centers for Excellent Technologies by BMVIT, BMDW, and the federal state of Vienna, managed by the FFG.

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. SBA Research, Vienna, Austria