Demons Hidden in the Light: Unrestricted Adversarial Illumination Attacks

  • Conference paper
  • Edge Computing and IoT: Systems, Management and Security (ICECI 2022)

Abstract

As deep learning-based computer vision is widely deployed on IoT devices, ensuring its security is especially critical. Among attacks against deep neural networks, adversarial attacks are a stealthy class that can mislead model decisions at test time. Exploring adversarial attacks therefore helps reveal model vulnerabilities in advance and enables targeted defenses.

Existing unrestricted adversarial attacks that go beyond the \(\ell _p\) norm often require additional models to keep perturbations both adversarial and imperceptible, which leads to high computational cost and task-specific designs. Inspired by the observation that models are unexpectedly vulnerable to changes in illumination, we develop the Adversarial Illumination Attack (AIA), an unrestricted adversarial attack that imposes large but imperceptible alterations on the image.
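To see why illumination changes slip past \(\ell _p\) budgets, consider relighting an image with a black-body (Planckian) illuminant: the result looks like a natural change of lighting, yet many pixels move at once, so the \(\ell _\infty\) distance from the original far exceeds common perturbation budgets such as 8/255. The sketch below, which is our own illustration rather than the authors' code, makes this concrete with a rough table of black-body colors.

```python
import torch

# Rough black-body (Planckian) illuminant colors, normalized RGB, for
# illustration only; a real implementation would use a calibrated locus table.
PLANCKIAN_RGB = {
    3000: (1.00, 0.71, 0.42),   # warm incandescent light
    5000: (1.00, 0.89, 0.81),   # neutral warm
    6500: (1.00, 1.00, 1.00),   # approximately daylight white
}

def relight(x: torch.Tensor, temp_k: int) -> torch.Tensor:
    """Scale each channel of images x (B, 3, H, W, values in [0, 1])
    by the chosen illuminant color."""
    gain = torch.tensor(PLANCKIAN_RGB[temp_k], device=x.device).view(1, 3, 1, 1)
    return torch.clamp(x * gain, 0.0, 1.0)

x = torch.rand(1, 3, 224, 224)              # stand-in for a natural image
x_warm = relight(x, 3000)
print((x_warm - x).abs().max().item())      # ~0.58: far above an 8/255 budget
```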

The core of the attack lies in simulating adversarial illumination through Planckian jitter, whose effectiveness stems from a causal chain in which the attacker misleads the model by manipulating the confounding factor. We propose an efficient approach that generates adversarial samples without additional models by regularizing on image gradients. We validate the effectiveness of adversarial illumination against black-box models, data preprocessing, and adversarially trained models through extensive experiments. The results confirm that AIA can serve both as a lightweight unrestricted attack and as a plug-in that boosts the effectiveness of other attacks.
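As a concrete reading of the pipeline above, the following sketch adversarially optimizes an illuminant while regularizing the image gradients of the relit image toward those of the original, so edges (and thus perceived content) are preserved. It is a minimal reconstruction under our own assumptions: a free per-channel gain stands in for a Planckian-constrained illuminant, and every function name and hyperparameter here is illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference spatial gradients of images x with shape (B, C, H, W)."""
    dx = x[..., :, 1:] - x[..., :, :-1]   # horizontal differences
    dy = x[..., 1:, :] - x[..., :-1, :]   # vertical differences
    return dx, dy

def adversarial_illumination(model, x, y, steps=50, lr=0.01, lam=10.0):
    """Optimize a per-channel illuminant gain so that the relit image is
    misclassified while its edge structure stays close to the clean image's."""
    gain = torch.ones(x.size(0), 3, 1, 1, device=x.device, requires_grad=True)
    dx0, dy0 = image_gradients(x)                    # clean-image gradients
    opt = torch.optim.Adam([gain], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(x * gain, 0.0, 1.0)      # relight the image
        adv_loss = -F.cross_entropy(model(x_adv), y) # push away from label y
        dx, dy = image_gradients(x_adv)
        # image-gradient regularization: keep edges close to the original
        reg = (dx - dx0).abs().mean() + (dy - dy0).abs().mean()
        loss = adv_loss + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x * gain.detach(), 0.0, 1.0)
```

Constraining the gain to the Planckian locus, e.g., by optimizing a color temperature and mapping it to RGB as in the first sketch, is what would keep the relit image looking like plausible natural illumination.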

Author information

Corresponding author

Correspondence to Yanjiao Chen.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Wang, K., Chen, Y., Xu, W. (2023). Demons Hidden in the Light: Unrestricted Adversarial Illumination Attacks. In: Xiao, Z., Zhao, P., Dai, X., Shu, J. (eds) Edge Computing and IoT: Systems, Management and Security. ICECI 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 478. Springer, Cham. https://doi.org/10.1007/978-3-031-28990-3_9

  • DOI: https://doi.org/10.1007/978-3-031-28990-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28989-7

  • Online ISBN: 978-3-031-28990-3

  • eBook Packages: Computer Science, Computer Science (R0)
