Deep Networks with RBF Layers to Prevent Adversarial Examples

  • Petra Vidnerová
  • Roman Neruda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10841)

Abstract

We propose a simple way to increase the robustness of deep neural network models to adversarial examples. The new architecture is obtained by stacking a deep neural network and an RBF network. Experiments show that this architecture is considerably more robust to adversarial examples than the original network, while its accuracy on legitimate data remains essentially unchanged.
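The stacked architecture can be illustrated with a short Keras sketch (Keras is the framework used in the authors' accompanying code). The RBFLayer below is a generic Gaussian RBF layer with trainable centres written only for this illustration, not the authors' rbf_keras implementation; the convolutional backbone, the number of RBF units and the width parameter beta are likewise assumed values for a small MNIST-style model.

import tensorflow as tf
from tensorflow.keras import layers, models


class RBFLayer(layers.Layer):
    """Gaussian RBF layer: unit j outputs exp(-beta * ||x - c_j||^2)."""

    def __init__(self, units, beta=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.beta = beta

    def build(self, input_shape):
        # one trainable centre per RBF unit
        self.centers = self.add_weight(
            name="centers",
            shape=(self.units, int(input_shape[-1])),
            initializer="random_normal",
            trainable=True,
        )
        super().build(input_shape)

    def call(self, inputs):
        # squared Euclidean distance from every input to every centre
        diff = tf.expand_dims(inputs, 1) - self.centers      # (batch, units, dim)
        dist_sq = tf.reduce_sum(tf.square(diff), axis=-1)    # (batch, units)
        return tf.exp(-self.beta * dist_sq)


def build_model(input_shape=(28, 28, 1), n_classes=10, rbf_units=64):
    """Convolutional network with an RBF layer stacked before the output."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        RBFLayer(rbf_units, beta=2.0),   # RBF layer on top of the deep features
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

The intuition behind the design is that Gaussian units respond strongly only near their centres, so inputs pushed away from the training data by an adversarial perturbation tend to produce low activations rather than confident misclassifications.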

Keywords

Adversarial examples · RBF networks · Deep neural networks · Convolutional networks

Acknowledgments

This work was partially supported by the Czech Grant Agency grant GA18-23827S and institutional support of the Institute of Computer Science RVO 67985807.

Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures” (CESNET LM2015042), is greatly appreciated.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
