
A Hybrid Model for Identity Obfuscation by Face Replacement

  • Qianru Sun
  • Ayush Tewari
  • Weipeng Xu
  • Mario Fritz
  • Christian Theobalt
  • Bernt Schiele
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11205)

Abstract

As more and more personal photos are shared and tagged in social media, avoiding privacy risks such as unintended recognition becomes increasingly challenging. We propose a new hybrid approach to obfuscate identities in photos by head replacement. Our approach combines state-of-the-art parametric face synthesis with the latest advances in Generative Adversarial Networks (GANs) for data-driven image synthesis. On the one hand, the parametric part of our method gives us control over the facial parameters and allows for explicit manipulation of the identity. On the other hand, the data-driven aspects allow for adding fine details and overall realism, as well as seamless blending into the scene context. In our experiments, we show highly realistic output of our system that improves over the previous state of the art in obfuscation rate while preserving a higher similarity to the original image content.
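The core idea of the parametric stage, as the abstract describes it, is that a face is represented by a vector of model coefficients, so identity can be manipulated explicitly while expression and pose are preserved. The following is a minimal, hypothetical sketch of that idea only; the parameter names, block sizes, and functions are illustrative stand-ins, not the paper's actual model or API, and the GAN refinement and blending stages are omitted entirely.

```python
import random

# Hypothetical parameter layout for a parametric (3DMM-style) face model:
# the first N_ID coefficients encode identity; the remainder encode
# expression and pose. Sizes are illustrative only.
N_ID, N_REST = 80, 40

def fit_parameters(rng):
    """Stand-in for monocular model fitting: returns one parameter vector."""
    return [rng.gauss(0.0, 1.0) for _ in range(N_ID + N_REST)]

def obfuscate_identity(params, rng):
    """Swap the identity block for a fresh sample while leaving the
    expression/pose block untouched -- the explicit identity control
    a parametric representation provides before any data-driven
    refinement is applied."""
    new_identity = [rng.gauss(0.0, 1.0) for _ in range(N_ID)]
    return new_identity + params[N_ID:]

rng = random.Random(0)
original = fit_parameters(rng)
obfuscated = obfuscate_identity(original, rng)

assert obfuscated[N_ID:] == original[N_ID:]   # expression/pose preserved
assert obfuscated[:N_ID] != original[:N_ID]   # identity replaced
```

In the full hybrid pipeline, the re-rendered face from such modified parameters would then be refined and blended into the image by the data-driven (GAN) stage.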

Notes

Acknowledgments

This research was supported in part by the German Research Foundation (DFG CRC 1223) and the ERC Starting Grant CapReal (335545). We thank Dr. Florian Bernard for the helpful discussions.

Supplementary material

474172_1_En_34_MOESM1_ESM.pdf (1.4 MB)
Supplementary material 1 (PDF 1426 KB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Qianru Sun (1)
  • Ayush Tewari (1)
  • Weipeng Xu (1)
  • Mario Fritz (1)
  • Christian Theobalt (1)
  • Bernt Schiele (1)

  1. Max Planck Institute for Informatics, Saarbrücken, Germany
