Pixel Based Adversarial Attacks on Convolutional Neural Network Models

  • Conference paper
  • Computational Intelligence in Data Science (ICCIDS 2021)

Abstract

Deep Neural Networks (DNNs) have found real-time applications such as facial recognition for security in ATMs and self-driving cars. A major security threat to DNNs comes from adversarial attacks. An adversarial sample is an image that has been altered in a way that is imperceptible to the human eye but causes the image to be misclassified by a Convolutional Neural Network (CNN). The objective of this research work is to devise pixel-based algorithms for adversarial attacks on images. To validate the algorithms, untargeted attacks are performed on the MNIST and CIFAR-10 datasets using techniques such as edge detection, Gradient-weighted Class Activation Mapping (Grad-CAM), and noise addition, whereas targeted attacks are performed on the MNIST dataset using saliency maps. The adversarial images thus generated are passed to a CNN model and the misclassification results are analyzed. From the analysis, it is inferred that CNNs are easier to fool with untargeted attacks than with targeted attacks. In addition, grayscale images (MNIST) are preferable for generating robust adversarial examples compared to color images (CIFAR-10).
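
The edge-detection-based untargeted attack mentioned in the abstract can be illustrated with a minimal sketch: edge pixels of an MNIST digit are located with a Canny detector and perturbed one at a time until the CNN's prediction changes. The names (edge_attack, predict_fn), the Canny thresholds, and the perturbation rule below are illustrative assumptions and not the paper's exact algorithm, which also includes Grad-CAM, noise-addition, and saliency-map variants.

```python
# Minimal sketch of an edge-guided untargeted pixel attack on an MNIST image.
# The perturbation rule and parameters are assumptions, not the paper's method.
import numpy as np
import cv2


def edge_attack(image, predict_fn, delta=0.5, max_pixels=50):
    """Perturb pixels on Canny edges until the model's prediction flips.

    image: float32 array in [0, 1], shape (28, 28) for MNIST.
    predict_fn: callable returning the predicted class label for an image.
    """
    original_label = predict_fn(image)
    adv = image.copy()

    # Canny expects an 8-bit image; the thresholds here are arbitrary choices.
    edges = cv2.Canny((image * 255).astype(np.uint8), 100, 200)
    edge_pixels = np.argwhere(edges > 0)

    for i, (r, c) in enumerate(edge_pixels[:max_pixels]):
        # Push each edge pixel toward the opposite intensity extreme.
        shift = delta if adv[r, c] < 0.5 else -delta
        adv[r, c] = np.clip(adv[r, c] + shift, 0.0, 1.0)
        if predict_fn(adv) != original_label:
            return adv, i + 1  # adversarial image and number of pixels changed
    return adv, min(len(edge_pixels), max_pixels)  # attack did not succeed
```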

Acknowledgements

The authors sincerely thank the Department of CSE, Sri Sivasubramaniya Nadar College of Engineering, for permission to use the High Performance Computing Laboratory and its GPU server for the successful execution of this project.

Author information

Corresponding author

Correspondence to Kavitha Srinivasan.

Ethics declarations

Conflict of Interest

The authors confirm that there is no conflict of interest for this publication.

Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Srinivasan, K., Jello Raveendran, P., Suresh, V., Anna Sundaram, N.R. (2021). Pixel Based Adversarial Attacks on Convolutional Neural Network Models. In: Krishnamurthy, V., Jaganathan, S., Rajaram, K., Shunmuganathan, S. (eds) Computational Intelligence in Data Science. ICCIDS 2021. IFIP Advances in Information and Communication Technology, vol 611. Springer, Cham. https://doi.org/10.1007/978-3-030-92600-7_14

  • DOI: https://doi.org/10.1007/978-3-030-92600-7_14

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92599-4

  • Online ISBN: 978-3-030-92600-7

  • eBook Packages: Computer Science, Computer Science (R0)
