Abstract
Adversarial attacks and the development of (deep) neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods. In contrast, Generalized Matrix LVQ is highly susceptible to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
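As background, the nearest-prototype decision rule and the GLVQ classifier cost [6] evaluated in the paper can be sketched as follows. This is a minimal NumPy illustration on hypothetical toy data, not the authors' implementation:

```python
import numpy as np

def glvq_cost(x, y, prototypes, proto_labels):
    """GLVQ classifier cost for one sample (Sato & Yamada, 1996).

    mu(x) = (d_plus - d_minus) / (d_plus + d_minus), where d_plus is the
    squared Euclidean distance to the closest prototype with the correct
    label y and d_minus the distance to the closest wrong-label prototype.
    mu < 0 means x is classified correctly; |mu| relates to the margin.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)   # squared distances
    d_plus = d[proto_labels == y].min()          # closest correct prototype
    d_minus = d[proto_labels != y].min()         # closest wrong prototype
    return (d_plus - d_minus) / (d_plus + d_minus)

def glvq_predict(x, prototypes, proto_labels):
    """Nearest-prototype classification."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    return proto_labels[np.argmin(d)]

# toy example with two prototypes per class (hypothetical data)
prototypes = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
proto_labels = np.array([0, 0, 1, 1])
x = np.array([0.1, 0.0])
print(glvq_predict(x, prototypes, proto_labels))       # -> 0
print(glvq_cost(x, 0, prototypes, proto_labels) < 0)   # -> True
```

GMLVQ and GTLVQ follow the same scheme but replace the squared Euclidean distance with a learned matrix distance and a tangent distance, respectively.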
Notes
1. Tensorflow: www.tensorflow.org; Keras: www.keras.io.
2.
3. Hence, median-\(\delta _{A}\) can be \(\infty \) if for over \(50\%\) of the samples no adversary was found.
4. A restriction to \(\mathcal {X}\) leads to an accuracy decrease of less than \(1\%\).
5. Note that the results of [7] hold for GTLVQ, as it can be seen as a version of GLVQ with infinitely many prototypes learning the affine subspaces.
6. For future work, a more extensive evaluation should be considered, including not only the norm for which a single attack was optimized but a combination of all three norms. This would give better insight into the characteristics of the attack and the defending model. The \(L^{0}\) norm can be interpreted as the number of pixels that must change, the \(L^{\infty }\) norm as the maximum deviation of a single pixel, and the \(L^{2}\) norm as a kind of average pixel change. Since attacks are optimized for a particular norm, considering only that norm may give a skewed impression of their attacking capability. Furthermore, a threshold accuracy that counts only adversaries below all three thresholds may give a more meaningful metric.
7. GTLVQ can be seen as a localized version of GMLVQ with the constraint that the \(\varvec{\varOmega }\) matrices must be orthogonal projectors.
8. A similar effect was observed in [10] for k-NN models.
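The three perturbation norms discussed in note 6, and the proposed all-three-thresholds check, can be computed directly from an adversarial perturbation. A small sketch, assuming images are NumPy arrays; the helper names are illustrative:

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """L0, L2 and Linf norms of the adversarial perturbation x_adv - x."""
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),      # number of changed pixels
        "L2": float(np.linalg.norm(delta, 2)),   # a kind of average pixel change
        "Linf": float(np.abs(delta).max()),      # maximum deviation of a pixel
    }

def below_all_thresholds(x, x_adv, t0, t2, tinf):
    """True only if the adversary is small under all three norms at once."""
    n = perturbation_norms(x, x_adv)
    return n["L0"] <= t0 and n["L2"] <= t2 and n["Linf"] <= tinf

# toy 4x4 "image" with two perturbed pixels
x = np.zeros((4, 4))
x_adv = x.copy()
x_adv[0, 0] = 0.3
x_adv[1, 2] = -0.1
print(perturbation_norms(x, x_adv))  # L0 = 2, L2 ~ 0.316, Linf = 0.3
```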
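The relation in note 7 can be checked numerically: when the GMLVQ matrix \(\varvec{\varOmega }\) is constrained to an orthogonal projector, the GMLVQ distance \((\mathbf {x}-\mathbf {w})^{T}\varvec{\varOmega }^{T}\varvec{\varOmega }(\mathbf {x}-\mathbf {w})\) coincides with a tangent distance as used in GTLVQ. A sketch with random data; dimensions and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# orthonormal basis V of a 2-D tangent subspace in R^5 (hypothetical)
V, _ = np.linalg.qr(rng.standard_normal((5, 2)))
P = np.eye(5) - V @ V.T   # orthogonal projector onto the complement of span(V)

x = rng.standard_normal(5)   # sample
w = rng.standard_normal(5)   # prototype

# GTLVQ tangent distance: squared norm of (x - w) projected off the subspace
d_tangent = np.linalg.norm(P @ (x - w)) ** 2

# GMLVQ distance with Omega constrained to the projector P;
# since P is symmetric and idempotent, P^T P = P and the two coincide
omega = P
d_gmlvq = (x - w) @ omega.T @ omega @ (x - w)

print(np.isclose(d_tangent, d_gmlvq))  # -> True
```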
References
Goodfellow I, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. In: International conference on learning representations
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: International conference on learning representations
Elsayed G, Krishnan D, Mobahi H, Regan K, Bengio S (2018) Large margin deep networks for classification. In: Advances in neural information processing systems, pp 850–860
Stutz D, Hein M, Schiele B (2018) Disentangling adversarial robustness and generalization. arXiv preprint arXiv:1812.00740
Kohonen T (1988) Learning vector quantization. Neural networks, 1(Supplement 1)
Sato A, Yamada K (1996) Generalized learning vector quantization. In: Advances in neural information processing systems, pp 423–429
Crammer K, Gilad-Bachrach R, Navot A, Tishby N (2003) Margin analysis of the LVQ algorithm. In: Advances in neural information processing systems, pp 479–486
Schneider P, Biehl M, Hammer B (2009) Adaptive relevance matrices in learning vector quantization. Neural Comput 21(12):3532–3561
Saralajew S, Villmann T (2016) Adaptive tangent distances in generalized learning vector quantization for transformation and distortion invariant classification learning. In: 2016 international joint conference on neural networks (IJCNN). IEEE, pp 2672–2679
Schott L, Rauber J, Bethge M, Brendel W (2019) Towards the first adversarially robust neural network model on MNIST. In: International conference on learning representations
Rauber J, Brendel W, Bethge M (2017) Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131
Kurakin A, Goodfellow I, Bengio S (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533
Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9185–9193
Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2574–2582
Brendel W, Rauber J, Bethge M (2018) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: Proceedings of the 6th international conference on learning representations
Kingma DP, Ba JL (2015) Adam: a method for stochastic optimization. In: Proceedings of the international conference on learning representations, pp 1–13
Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: International conference on learning representations
Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: Proceedings of the 35th international conference on machine learning
Globerson A, Roweis S (2006) Metric learning by collapsing classes. In: Advances in neural information processing systems, pp 451–458
Schneider P, Bunte K, Stiekema H, Hammer B, Villmann T, Biehl M (2010) Regularization in matrix relevance learning. IEEE Trans Neural Netw 21(5):831–840
Croce F, Andriushchenko M, Hein M (2018) Provable robustness of ReLU networks via maximization of linear regions. arXiv preprint arXiv:1810.07481
© 2020 Springer Nature Switzerland AG
Cite this paper
Saralajew, S., Holdijk, L., Rees, M., Villmann, T. (2020). Robustness of Generalized Learning Vector Quantization Models Against Adversarial Attacks. In: Vellido, A., Gibert, K., Angulo, C., Martín Guerrero, J. (eds) Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization. WSOM 2019. Advances in Intelligent Systems and Computing, vol 976. Springer, Cham. https://doi.org/10.1007/978-3-030-19642-4_19
Print ISBN: 978-3-030-19641-7
Online ISBN: 978-3-030-19642-4