Open-Set Adversarial Defense

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12362)


Open-set recognition and adversarial defense address two key aspects of deep learning that are vital for real-world deployment. The objective of open-set recognition is to identify samples from open-set classes during testing, while adversarial defense aims to defend the network against images with imperceptible adversarial perturbations. In this paper, we show that open-set recognition systems are vulnerable to adversarial attacks. Furthermore, we show that adversarial defense mechanisms trained on known classes do not generalize well to open-set samples. Motivated by this observation, we emphasize the need for an Open-Set Adversarial Defense (OSAD) mechanism. This paper proposes an Open-Set Defense Network (OSDN) as a solution to the OSAD problem. The proposed network uses an encoder with feature-denoising layers coupled with a classifier to learn a noise-free latent feature representation. Two techniques are employed to obtain an informative latent feature space with the objective of improving open-set performance. First, a decoder is used to ensure that clean images can be reconstructed from the obtained latent features. Then, self-supervision is used to ensure that the latent features are informative enough to carry out an auxiliary task. We introduce a testing protocol to evaluate OSAD performance and show the effectiveness of the proposed method on multiple object classification datasets. The implementation code of the proposed method is available at:
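The three-branch design described above (a shared encoder feeding a classifier, a decoder that reconstructs the clean image, and a self-supervised auxiliary head) can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the paper's architecture: the layer sizes, the plain convolutional encoder standing in for a ResNet with feature-denoising layers, and the 4-way rotation-prediction auxiliary task are all assumptions made here for brevity.

```python
import torch
import torch.nn as nn

class OSDNSketch(nn.Module):
    """Hypothetical sketch of an OSDN-style network: one encoder,
    three output branches (classification, reconstruction, auxiliary)."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Encoder: stands in for the paper's feature-denoising encoder;
        # maps a 3x32x32 image to a 64x8x8 latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Closed-set classifier on the flattened latent features.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)
        # Decoder: mirrors the encoder to reconstruct the clean image,
        # encouraging the latent space to retain image content.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Auxiliary self-supervised head: predicts which of four
        # rotations (0/90/180/270 degrees) was applied to the input.
        self.rotation_head = nn.Linear(64 * 8 * 8, 4)

    def forward(self, x):
        z = self.encoder(x)          # latent features
        flat = z.flatten(1)
        return self.classifier(flat), self.decoder(z), self.rotation_head(flat)

model = OSDNSketch()
logits, recon, rot = model(torch.randn(2, 3, 32, 32))
```

During training, each branch would contribute a loss term (cross-entropy for classification, a reconstruction loss for the decoder, and cross-entropy over rotations for the auxiliary head); the combination pushes the encoder toward latent features that are both noise-free and informative.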


Keywords: Adversarial defense · Open-set recognition



This work is partially supported by Research Grants Council (RGC/HKBU12200518), Hong Kong. Vishal M. Patel was supported by the DARPA GARD Program HR001119S0026-GARD-FP-052.

Supplementary material

Supplementary material 1: 504472_1_En_40_MOESM1_ESM.pdf (902 KB)



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong
  2. AWS AI Labs, New York, USA
  3. Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
