Right for the Right Reason: Training Agnostic Networks

  • Sen Jia
  • Thomas Lansdall-Welfare (corresponding author)
  • Nello Cristianini
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11191)

Abstract

We consider the problem of a neural network being asked to classify images (or other inputs) without making implicit use of a “protected concept”, that is, a concept that should play no role in the network’s decision. Typically, such concepts include information like gender or race, or other contextual information such as image backgrounds, which may be implicitly reflected in unknown correlations with other variables, so that simply removing them from the input features is not sufficient. In other words, making accurate predictions is not good enough if those predictions rely on information that should not be used: predictive performance is not the only important metric for learning systems. We apply a method developed in the context of domain adaptation to this problem of “being right for the right reason”, requiring a classifier to make its decision in a way that is entirely ‘agnostic’ to a given protected concept (e.g. gender, race or image background), even if that concept is implicitly reflected in other attributes via unknown correlations. After defining the concept of an ‘agnostic model’, we demonstrate how the Domain-Adversarial Neural Network can remove unwanted information from a model using a gradient reversal layer.
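
The gradient reversal mechanism referred to above (from the Domain-Adversarial Neural Network of Ganin et al. [6]) can be sketched concretely. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the names GradReverse and AgnosticNet, the 784-dimensional flattened input, the single linear feature layer and the reversal weight lambd are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies the gradient by
        -lambd in the backward pass, so the shared features are trained
        to *defeat* the protected-concept classifier."""

        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Flip the sign (and scale) of the gradient flowing back
            # into the features; no gradient for the lambd argument.
            return -ctx.lambd * grad_output, None

    class AgnosticNet(nn.Module):
        """Shared features feed two heads: the main task classifier and
        an adversarial head that tries to predict the protected concept.
        Sizes here are illustrative (e.g. flattened 28x28 inputs)."""

        def __init__(self, n_in=784, n_features=128, n_classes=10,
                     n_protected=2, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.features = nn.Sequential(nn.Linear(n_in, n_features),
                                          nn.ReLU())
            self.task_head = nn.Linear(n_features, n_classes)
            self.protected_head = nn.Linear(n_features, n_protected)

        def forward(self, x):
            f = self.features(x)
            y_task = self.task_head(f)
            # The protected head sees the features through the reversal
            # layer: it learns to predict the concept, while the reversed
            # gradient pushes the features to carry no usable information
            # about it.
            y_protected = self.protected_head(GradReverse.apply(f, self.lambd))
            return y_task, y_protected

    # Training sketch: minimise the task loss plus the protected-concept
    # loss; the reversal makes the features adversarial to the latter.
    net = AgnosticNet()
    x = torch.randn(32, 784)
    task_labels = torch.randint(0, 10, (32,))
    protected_labels = torch.randint(0, 2, (32,))
    y_task, y_prot = net(x)
    loss = (nn.functional.cross_entropy(y_task, task_labels)
            + nn.functional.cross_entropy(y_prot, protected_labels))
    loss.backward()

Under these assumptions, the shared features are optimised to support the main task while the reversed gradient drives them towards carrying no usable information about the protected concept, which is the sense in which the resulting model is ‘agnostic’.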

Keywords

Agnostic models · Explainable AI · Fairness in AI · Trust

Acknowledgements

SJ, TLW and NC are supported by the FP7 Ideas: European Research Council Grant 339365 - ThinkBIG.

References

  1. Ben-David, S., Blitzer, J., Crammer, K., Pereira, F.: Analysis of representations for domain adaptation. In: Advances in Neural Information Processing Systems, pp. 137–144 (2007)
  2. Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science 356(6334), 183–186 (2017)
  3. Chu, W., Cai, D.: Deep feature based contextual model for object detection. Neurocomputing 275, 1035–1042 (2018)
  4. Cristianini, N.: On the current paradigm in artificial intelligence. AI Communications 27(1), 37–43 (2014)
  5. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), pp. 248–255. IEEE (2009)
  6. Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(1), 2030–2096 (2016)
  7. Girshick, R.B., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524 (2013)
  8. Halevy, A., Norvig, P., Pereira, F.: The unreasonable effectiveness of data. IEEE Intell. Syst. 24(2), 8–12 (2009)
  9. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
  10. Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst (2007)
  11. Jia, S., Lansdall-Welfare, T., Cristianini, N.: Gender classification by deep learning on millions of weakly labelled images. In: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), pp. 462–467. IEEE (2016)
  12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105. Curran Associates Inc. (2012)
  13. Li, J., Wei, Y., Liang, X., Dong, J., Tingfa, X., Feng, J., Yan, S.: Attentive contexts for object detection. IEEE Trans. Multimed. 19(5), 944–954 (2017)
  14. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
  15. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM (2016)
  16. Russakovsky, O., Deng, J., Hao, S., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  18. Wulfmeier, M., Bewley, A., Posner, I.: Addressing appearance change in outdoor robotics with adversarial domain adaptation. arXiv preprint arXiv:1703.01461 (2017)
  19. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53
  20. Zhao, J., Wang, T., Yatskar, M., Ordonez, V., Chang, K.-W.: Men also like shopping: reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457 (2017)
  21. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: a 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. (2017)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Sen Jia (1)
  • Thomas Lansdall-Welfare (1), corresponding author
  • Nello Cristianini (1)

  1. Intelligent Systems Laboratory, University of Bristol, Bristol, UK
