
Edge-Aware Graph Representation Learning and Reasoning for Face Parsing

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

Face parsing assigns a pixel-wise label to each facial component and has drawn much attention recently. Previous methods have shown their effectiveness in face parsing, yet they overlook the correlation among different face regions. This correlation is a critical clue about facial appearance, pose, expression, etc., and should be taken into account for face parsing. To this end, we propose to model and reason over region-wise relations by learning graph representations, and we leverage the edge information between regions for optimized abstraction. Specifically, we encode a facial image into a global graph representation in which a collection of pixels ("regions") with similar features is projected onto each vertex. Our model learns and reasons over the relations between regions by propagating information across vertices on the graph. Furthermore, we incorporate edge information when aggregating pixel-wise features onto vertices, which emphasizes the features around edges and yields finer segmentation along region boundaries. The learned graph representation is finally projected back to the pixel grid for parsing. Experiments demonstrate that our model outperforms state-of-the-art methods on the widely used Helen dataset, and also exhibits superior performance on the large-scale CelebAMask-HQ and LaPa datasets. The code is available at https://github.com/tegusi/EAGRNet.
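The abstract describes a three-stage pipeline: edge-weighted projection of pixel features onto graph vertices, relational reasoning across those vertices, and reprojection back to the pixel grid. The following is a minimal PyTorch sketch of that idea, not the authors' released EAGRNet code; the module name, the vertex count, the learnable adjacency, and the `(1 + edge_map)` re-weighting are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeAwareGraphReasoning(nn.Module):
    """Sketch of edge-aware graph reasoning (hypothetical, not the paper's exact module).

    Pixels are softly assigned to a small set of graph vertices; the assignment
    is re-weighted by an edge map so boundary pixels contribute more. Vertex
    features are refined by one graph-propagation step and projected back onto
    the pixel grid as a residual.
    """

    def __init__(self, in_channels: int, num_vertices: int = 16):
        super().__init__()
        self.assign = nn.Conv2d(in_channels, num_vertices, kernel_size=1)  # pixel-to-vertex logits
        self.gcn = nn.Linear(in_channels, in_channels)                     # vertex feature transform
        self.adj = nn.Parameter(torch.eye(num_vertices))                   # learnable vertex adjacency

    def forward(self, x: torch.Tensor, edge_map: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Soft assignment of each pixel to the vertices, emphasized near edges.
        q = torch.softmax(self.assign(x), dim=1)          # (B, V, H, W)
        q = q * (1.0 + edge_map)                          # edge_map: (B, 1, H, W) in [0, 1]
        q = q.flatten(2)                                  # (B, V, HW)
        feats = x.flatten(2).transpose(1, 2)              # (B, HW, C)
        # Aggregate pixel features onto vertices (normalized weighted average).
        vertices = torch.bmm(q, feats) / (q.sum(-1, keepdim=True) + 1e-6)  # (B, V, C)
        # One reasoning step: propagate along the (row-normalized) adjacency, then transform.
        adj = torch.softmax(self.adj, dim=-1)
        vertices = F.relu(self.gcn(torch.matmul(adj, vertices)))           # (B, V, C)
        # Reproject vertex features back to the pixel grid and fuse residually.
        out = torch.bmm(q.transpose(1, 2), vertices)      # (B, HW, C)
        return x + out.transpose(1, 2).reshape(b, c, h, w)
```

Given backbone features `x` of shape `(B, C, H, W)` and a predicted edge map of shape `(B, 1, H, W)` with values in `[0, 1]`, `EdgeAwareGraphReasoning(C)(x, edge_map)` returns refined features of the same shape, which a parsing head could then decode into per-pixel labels.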

Keywords

Face parsing · Graph representation · Attention mechanism · Graph reasoning

Notes

Acknowledgement

This work was supported by National Natural Science Foundation of China [61972009], Beijing Natural Science Foundation [4194080] and Beijing Academy of Artificial Intelligence (BAAI).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Wangxuan Institute of Computer Technology, Peking University, Beijing, China
  2. JD AI Research, Beijing, China
