
Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems


Deep neural network (DNN) based face recognition systems have become one of the most popular modalities for user identity authentication. However, recent studies have shown that malicious attackers can inject specific backdoors into the DNN model of a face recognition system, an attack known as a backdoor attack. The attacker can then trigger the backdoor and impersonate someone else to log into the system, without affecting normal use by legitimate users. Existing studies use accessories (such as purple sunglasses or a bandanna) as the triggers of their backdoor attacks; these triggers are visually conspicuous and easily perceived by humans, which causes the backdoor attacks to fail. In this paper, for the first time, we exploit facial features as carriers to embed backdoors, and propose a novel backdoor attack method named BHF2 (Backdoor Hidden in Facial Features). BHF2 constructs masks with the shapes of facial features (eyebrows and beard), and then injects the backdoors into the masks to ensure visual stealthiness. Further, to make the backdoors look more natural, we propose the BHF2N (Backdoor Hidden in Facial Features Naturally) method, which exploits an artificial intelligence (AI) based tool to automatically embed natural backdoors. The generated backdoors are visually stealthy, which guarantees the concealment of the backdoor attacks. The proposed methods (BHF2 and BHF2N) can be applied in black-box attack scenarios, in which a malicious adversary has no knowledge of the target face recognition system. Moreover, the proposed attack methods are feasible in strict identity authentication scenarios where accessories are not permitted.
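To make the mask-based embedding idea above concrete, the following is a minimal sketch, not the authors' implementation: the array sizes, the eyebrow-shaped mask region, and the blending strength `alpha` are all hypothetical assumptions used purely for illustration. It blends a trigger pattern into a face image only inside a facial-feature-shaped mask, keeping every other pixel untouched.

```python
import numpy as np

def embed_backdoor(face, mask, trigger, alpha=0.2):
    """Blend `trigger` into `face` only where `mask` is True.

    face:    HxW float array in [0, 1] (a grayscale face image)
    mask:    HxW boolean array shaped like a facial feature (e.g. an eyebrow)
    trigger: HxW float array holding the backdoor pattern
    alpha:   blending strength; small values keep the change visually subtle
    """
    poisoned = face.copy()
    poisoned[mask] = (1 - alpha) * face[mask] + alpha * trigger[mask]
    return np.clip(poisoned, 0.0, 1.0)

# Toy 8x8 "face" and a hypothetical eyebrow-shaped mask over one row segment.
face = np.full((8, 8), 0.5)
mask = np.zeros((8, 8), dtype=bool)
mask[2, 1:4] = True                  # hypothetical eyebrow region (3 pixels)
trigger = np.ones((8, 8))            # backdoor pattern

poisoned = embed_backdoor(face, mask, trigger)
# The attacker would pair `poisoned` with the target identity's label
# when poisoning the training set; pixels outside the mask are unchanged.
print(int(np.sum(poisoned != face)))  # → 3
```

Only the three masked pixels differ from the clean image, which is what keeps this kind of trigger far less conspicuous than an accessory such as sunglasses.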
Experimental results on two state-of-the-art face recognition models show that the maximum success rate of the proposed attack reaches 100% on both the DeepID1 and VGGFace models, while the accuracy degradation of the target recognition models is as low as 0.01% (DeepID1) and 0.02% (VGGFace), respectively. Meanwhile, the generated backdoors achieve visual stealthiness: the pixel change rate of a backdoor instance relative to its clean face image is as low as 0.16%, and its structural similarity and dHash similarity scores are as high as 98.82% and 98.19%, respectively.
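The stealthiness numbers above can be illustrated with a simplified sketch. This is not the paper's evaluation code: it computes only the pixel change rate and a dHash-style similarity on toy arrays (the structural similarity metric is omitted), and the image sizes and perturbation are hypothetical. Real dHash first resizes the image to 9x8; the toy input here is assumed to already have those dimensions.

```python
import numpy as np

def pixel_change_rate(clean, backdoored):
    """Fraction of pixels that differ between the clean and backdoored image."""
    return float(np.mean(clean != backdoored))

def dhash_bits(img):
    """Difference hash: compare each pixel with its right-hand neighbour.

    Assumes the image is already 8 rows x 9 columns, yielding 64 bits;
    a real implementation would resize an arbitrary image first.
    """
    return (img[:, 1:] > img[:, :-1]).flatten()

def dhash_similarity(a, b):
    """1 minus the normalised Hamming distance between the two hashes."""
    return 1.0 - float(np.mean(dhash_bits(a) != dhash_bits(b)))

rng = np.random.default_rng(0)
clean = rng.random((8, 9))
backdoored = clean.copy()
backdoored[2, 1] += 0.01          # one subtly perturbed pixel

print(f"pixel change rate: {pixel_change_rate(clean, backdoored):.2%}")
print(f"dHash similarity:  {dhash_similarity(clean, backdoored):.2%}")
```

A small change rate and a dHash similarity close to 100% correspond to the paper's claim that a backdoor instance is nearly indistinguishable from its clean counterpart.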








This work is supported by the National Natural Science Foundation of China (No. 61602241).

Author information



Corresponding author

Correspondence to Mingfu Xue.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Privacy-Preserving Computing

Guest Editors: Kaiping Xue, Zhe Liu, Haojin Zhu, Miao Pan and David S.L. Wei


About this article


Cite this article

Xue, M., He, C., Wang, J. et al. Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems. Peer-to-Peer Netw. Appl. 14, 1458–1474 (2021).




  • Artificial intelligence security
  • Backdoor attacks
  • Deep learning
  • Face recognition systems
  • Biometric feature recognition