
Hiding from infrared detectors in real world with adversarial clothes

Published in Applied Intelligence.

Abstract

Thermal infrared detection is widely used in many scenarios, including fast body-temperature monitoring, safety surveillance, and autonomous driving; however, its security has not received sufficient attention. We propose adversarial clothing to test the security of infrared detection: clothing that can hide the wearer from infrared detectors in the real world. The clothing uses flexible carbon-fiber heaters as its basic elements, and the patterns formed by the heaters are optimized with adversarial example techniques. In the digital world, the optimized pattern lowered the average precision (AP) of YOLOv3 by 66.33%, whereas a random pattern lowered the AP by only 31.33%. We then manufactured the adversarial clothing and evaluated infrared detectors in the physical world: the adversarial clothing lowered the AP of YOLOv3 by 43.95%, while clothing with randomly placed heaters lowered it by only 19.21%. With ensemble attack techniques, our attack transfers well to unseen CNN models. We also tested five typical defense methods, which achieved only limited success against our attack. These results indicate that current thermal infrared detectors are not robust.
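The pattern-optimization idea in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's pipeline: the "detector" below is a toy sigmoid-linear scorer standing in for YOLOv3, and `detector_confidence`, `optimize_pattern`, and the mask layout are hypothetical names introduced here for illustration. The only mechanism carried over is gradient descent on the pixels covered by the heaters to drive the detector's person confidence down.

```python
import numpy as np

# Toy sketch, assuming a differentiable surrogate detector: a fixed linear
# scorer followed by a sigmoid, NOT YOLOv3. The "heater pattern" is the set
# of pixels under the clothing mask, optimized to suppress the confidence.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detector_confidence(image, w, b):
    """Surrogate 'person' confidence for a flattened thermal image."""
    return sigmoid(image.ravel() @ w + b)

def optimize_pattern(image, mask, w, b, steps=200, lr=0.5):
    """Gradient descent on the masked region to lower detector confidence."""
    x = image.copy()
    for _ in range(steps):
        p = detector_confidence(x, w, b)
        # d(confidence)/d(pixel) for a sigmoid-linear scorer: p * (1 - p) * w
        grad = (p * (1.0 - p) * w).reshape(image.shape)
        x -= lr * grad * mask          # update only heater-covered pixels
        x = np.clip(x, 0.0, 1.0)       # stay in a physical temperature range
    return x

# 16x16 toy thermal image; small weights keep the scorer unsaturated.
img = rng.uniform(0.4, 0.6, size=(16, 16))
w = 0.1 * rng.normal(size=img.size)
b = 0.5
mask = np.zeros_like(img)
mask[4:12, 4:12] = 1.0                 # region covered by the clothing

before = detector_confidence(img, w, b)
after = detector_confidence(optimize_pattern(img, mask, w, b), w, b)
```

In the paper the objective is analogous but targets a real detector's predicted boxes, the heater layout is constrained to physically realizable patterns, and an ensemble of detectors is attacked jointly to improve transferability.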




Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. We have made the THU_TIR dataset public. https://github.com/zxp555/THU_TIR-dataset


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 61734004, U19B2034, 62061136001 and the Tsinghua-Toyota Joint Research Fund.

Author information

Correspondence to Xiaolin Hu or Zheyao Wang.

Ethics declarations

Ethical Approval

All experiments involving human participation were approved by the Department of Psychology Ethics Committee, Tsinghua University, Beijing, China.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Conflicts of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhu, X., Hu, Z., Huang, S. et al. Hiding from infrared detectors in real world with adversarial clothes. Appl Intell 53, 29537–29555 (2023). https://doi.org/10.1007/s10489-023-05102-5

