Robustness of Generalized Learning Vector Quantization Models Against Adversarial Attacks

Conference paper
Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization (WSOM 2019)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 976)

Abstract

Adversarial attacks and the development of (deep) neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods. In contrast, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
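As a rough, self-contained illustration of the prototype-based classification scheme that these models share (a NumPy sketch with toy prototypes, not the authors' implementation), GLVQ assigns the label of the nearest prototype and is trained by minimizing the relative distance difference \(\mu (\varvec{x}) = (d^{+}-d^{-})/(d^{+}+d^{-})\):

    import numpy as np

    def glvq_predict(x, prototypes, proto_labels):
        """Assign the label of the closest prototype (squared Euclidean distance)."""
        d = np.sum((prototypes - x) ** 2, axis=1)
        return proto_labels[np.argmin(d)]

    def glvq_mu(x, y, prototypes, proto_labels):
        """GLVQ classifier function mu(x) in [-1, 1]; negative values mean a correct classification."""
        d = np.sum((prototypes - x) ** 2, axis=1)
        d_plus = d[proto_labels == y].min()   # closest prototype with the correct label
        d_minus = d[proto_labels != y].min()  # closest prototype with any other label
        return (d_plus - d_minus) / (d_plus + d_minus)

    # Toy example: one prototype per class.
    prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
    proto_labels = np.array([0, 1])
    x = np.array([0.2, 0.1])
    print(glvq_predict(x, prototypes, proto_labels))  # 0
    print(glvq_mu(x, 0, prototypes, proto_labels))    # < 0: correctly classified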


Notes

  1. Tensorflow: www.tensorflow.org; Keras: www.keras.io.

  2. https://foolbox.readthedocs.io/en/latest/modules/zoo.html.

  3. Hence, median-\(\delta _{A}\) can be \(\infty \) if no adversary was found for more than \(50\%\) of the samples; the sketch below illustrates this convention.
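     A minimal sketch of this convention, assuming the per-sample perturbation sizes are
     collected in an array and failed attacks are marked with infinity (the values below
     are purely hypothetical):

        import numpy as np

        # Hypothetical per-sample perturbation sizes; np.inf marks samples for which
        # the attack found no adversary.
        deltas = np.array([0.8, 1.2, np.inf, np.inf, np.inf])
        median_delta = np.median(deltas)  # np.inf, since more than 50% of the entries are np.inf
        print(median_delta)  # inf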

  4. A restriction to \(\mathcal {X}\) leads to an accuracy decrease of less than \(1\%\).

  5. Note that the results of [7] hold for GTLVQ, since it can be seen as a version of GLVQ with infinitely many prototypes that learn the affine subspaces.

  6. For future work, a more extensive evaluation should be considered that includes not only the norm for which a single attack was optimized but a combination of all three norms. This gives better insight into the characteristics of the attack and the defending model. The \(L^{0}\) norm can be interpreted as the number of pixels that have to change, the \(L^{\infty }\) norm as the maximum deviation of a single pixel, and the \(L^{2}\) norm as a kind of average pixel change. Since attacks are optimized for a certain norm, considering only that norm might give a skewed impression of their attacking capability. Furthermore, computing a threshold accuracy that counts only adversaries below all three thresholds may give a more meaningful metric; see the sketch below.
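     A minimal sketch of how the three norms of a perturbation and such a combined
     threshold check could be computed (variable names and thresholds are assumptions,
     not the evaluation code of the paper):

        import numpy as np

        def perturbation_norms(x, x_adv):
            """Return the L0, L2 and L-infinity norms of the perturbation x_adv - x."""
            delta = (x_adv - x).ravel()
            l0 = np.count_nonzero(delta)      # number of pixels that change
            l2 = np.linalg.norm(delta)        # root-sum-square ("average") pixel change
            linf = np.abs(delta).max()        # maximum deviation of a single pixel
            return l0, l2, linf

        def below_all_thresholds(x, x_adv, t0, t2, tinf):
            """True only if the adversary stays below the L0, L2 and L-infinity thresholds."""
            l0, l2, linf = perturbation_norms(x, x_adv)
            return l0 <= t0 and l2 <= t2 and linf <= tinf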

  7. GTLVQ can be seen as a localized version of GMLVQ with the constraint that the \(\varvec{\varOmega }\) matrices must be orthogonal projectors; the relation is sketched below.
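     Sketch of the relation, using the standard definitions of the two dissimilarities
     (the notation here is assumed, not taken from the paper): the GMLVQ dissimilarity is
     \(d_{\varvec{\varOmega }}(\varvec{x},\varvec{w}) = (\varvec{x}-\varvec{w})^{\top }\varvec{\varOmega }^{\top }\varvec{\varOmega }(\varvec{x}-\varvec{w})\),
     while the GTLVQ tangent distance for a prototype \(\varvec{w}\) with tangent basis \(\varvec{W}\)
     (orthonormal columns) is
     \(d_{T}(\varvec{x},\varvec{w}) = \min _{\varvec{\theta }} \Vert \varvec{x}-(\varvec{w}+\varvec{W}\varvec{\theta })\Vert ^{2} = (\varvec{x}-\varvec{w})^{\top }\varvec{P}^{\top }\varvec{P}(\varvec{x}-\varvec{w})\)
     with \(\varvec{P} = \varvec{I}-\varvec{W}\varvec{W}^{\top }\), i.e. GMLVQ with a per-prototype
     \(\varvec{\varOmega } = \varvec{P}\) that is constrained to be an orthogonal projector.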

  8. A similar effect was observed in [10] for k-NN models.

References

  1. Goodfellow I, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. In: International conference on learning representations

  2. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: International conference on learning representations

  3. Elsayed G, Krishnan D, Mobahi H, Regan K, Bengio S (2018) Large margin deep networks for classification. In: Advances in neural information processing systems, pp 850–860

  4. Stutz D, Hein M, Schiele B (2018) Disentangling adversarial robustness and generalization. arXiv preprint arXiv:1812.00740

  5. Kohonen T (1988) Learning vector quantization. Neural networks, 1(Supplement 1)

  6. Sato A, Yamada K (1996) Generalized learning vector quantization. In: Advances in neural information processing systems, pp 423–429

  7. Crammer K, Gilad-Bachrach R, Navot A, Tishby N (2003) Margin analysis of the LVQ algorithm. In: Advances in neural information processing systems, pp 479–486

  8. Schneider P, Biehl M, Hammer B (2009) Adaptive relevance matrices in learning vector quantization. Neural Comput 21(12):3532–3561

  9. Saralajew S, Villmann T (2016) Adaptive tangent distances in generalized learning vector quantization for transformation and distortion invariant classification learning. In: 2016 international joint conference on neural networks (IJCNN). IEEE, pp 2672–2679

  10. Schott L, Rauber J, Bethge M, Brendel W (2019) Towards the first adversarially robust neural network model on MNIST. In: International conference on learning representations

  11. Rauber J, Brendel W, Bethge M (2017) Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131

  12. Kurakin A, Goodfellow I, Bengio S (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533

  13. Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9185–9193

  14. Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2574–2582

  15. Brendel W, Rauber J, Bethge M (2018) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: Proceedings of the 6th international conference on learning representations

  16. Kingma DP, Ba JL (2015) Adam: a method for stochastic optimization. In: Proceedings of the international conference on learning representations, pp 1–13

  17. Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: International conference on learning representations

  18. Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: Proceedings of the 35th international conference on machine learning

  19. Globerson A, Roweis S (2006) Metric learning by collapsing classes. In: Advances in neural information processing systems, pp 451–458

  20. Schneider P, Bunte K, Stiekema H, Hammer B, Villmann T, Biehl M (2010) Regularization in matrix relevance learning. IEEE Trans Neural Netw 21(5):831–840

  21. Croce F, Andriushchenko M, Hein M (2018) Provable robustness of ReLU networks via maximization of linear regions. arXiv preprint arXiv:1810.07481

Author information

Correspondence to Sascha Saralajew.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Saralajew, S., Holdijk, L., Rees, M., Villmann, T. (2020). Robustness of Generalized Learning Vector Quantization Models Against Adversarial Attacks. In: Vellido, A., Gibert, K., Angulo, C., Martín Guerrero, J. (eds) Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization. WSOM 2019. Advances in Intelligent Systems and Computing, vol 976. Springer, Cham. https://doi.org/10.1007/978-3-030-19642-4_19
