Face Inpainting with Dynamic Structural Information of Facial Action Units

  • Le Li
  • Zhilei Liu
  • Cuicui Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11903)

Abstract

In recent years, deep learning based face inpainting methods have achieved promising results, mostly built on generative adversarial networks (GANs). Trained on a large number of samples, a GAN can synthesize realistic fake images from the known training distribution, but its learned parameters generalize poorly to images outside the training samples. In addition, most previous works did not take high-level facial structure information into account, such as the co-occurrence of facial action units. To better exploit facial structural knowledge, this paper proposes a method that combines prior knowledge based on high-level dynamic structural information of facial action units with a GAN to complete face images. We validate the effectiveness of our approach for facial expression restoration during face inpainting on the BP4D and DISFA datasets.
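The abstract only sketches how AU priors are combined with a GAN. As a loose, hypothetical illustration (all names and weights are assumptions, not the authors' actual model), a generator objective for AU-guided inpainting might combine a hole-region reconstruction loss, an adversarial realism term, and an AU-consistency term:

```python
import numpy as np

def inpainting_loss(completed, target, mask, d_score, au_pred, au_prior,
                    w_adv=0.1, w_au=0.5):
    """Hypothetical combined generator loss for AU-guided face inpainting.

    completed, target : generated / ground-truth images, (H, W) arrays
    mask              : 1 inside the missing region, 0 elsewhere
    d_score           : discriminator's realism score in (0, 1)
    au_pred, au_prior : predicted vs. prior facial action-unit activations
    """
    # L1 reconstruction restricted to the inpainted (masked) region
    l_rec = np.abs((completed - target) * mask).mean()
    # Non-saturating adversarial term: push the discriminator toward "real"
    l_adv = -np.log(d_score + 1e-8)
    # Encourage the completed face to respect the AU co-occurrence prior
    l_au = np.square(au_pred - au_prior).mean()
    return l_rec + w_adv * l_adv + w_au * l_au
```

The AU term is what distinguishes this sketch from a plain GAN inpainter: it penalizes completions whose action-unit activations contradict the dynamic structural prior, so the restored region stays consistent with the rest of the expression.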

Keywords

Face inpainting · Facial action units · GAN

Notes

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grants of 41806116 and 61503277.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Intelligence and Computing, Tianjin University, Tianjin, China
  2. School of Marine Science and Technology, Tianjin University, Tianjin, China
