Towards Metrical Reconstruction of Human Faces

  • Conference paper
  • Published in: Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13673)

Abstract

Face reconstruction and tracking is a building block of numerous applications in AR/VR, human-machine interaction, and medicine. Most of these applications rely on a metrically correct prediction of the shape, especially when the reconstructed subject is put into a metrical context (i.e., when there is a reference object of known size). A metrical reconstruction is also needed for any application that measures distances and dimensions of the subject (e.g., to virtually fit a glasses frame). State-of-the-art methods for face reconstruction from a single image are trained on large 2D image datasets in a self-supervised fashion. However, due to the nature of perspective projection, they are not able to reconstruct the actual face dimensions; even predicting the average human face outperforms some of these methods in a metrical sense. To learn the actual shape of a face, we argue for a supervised training scheme. Since there exists no large-scale 3D dataset for this task, we annotated and unified small- and medium-scale databases. The resulting unified dataset is still medium-scale, with more than 2k identities, and training purely on it would lead to overfitting. Hence, we take advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to expression, illumination, and camera changes. Using these features, we train our face shape estimator in a supervised fashion, inheriting the robustness and generalization of the face recognition network. Our method, which we call MICA (MetrIC fAce), outperforms state-of-the-art reconstruction methods by a large margin, both on current non-metric benchmarks and on our metric benchmarks (15% and 24% lower average error on NoW, respectively). Project website: https://zielon.github.io/mica/
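
The abstract sketches the core recipe: a face recognition backbone, pretrained on large-scale 2D data and kept frozen, produces an identity embedding robust to expression, illumination, and camera changes; a small trainable head maps this embedding to metrical shape parameters and is trained with direct 3D supervision on the unified dataset. The following PyTorch sketch illustrates that structure only; it is not the authors' code, and the stand-in encoder, layer sizes, 300-dimensional shape space, and `decode_shape` placeholder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MicaStyleRegressor(nn.Module):
    """Frozen identity encoder + small trainable head for shape coefficients."""

    def __init__(self, embed_dim: int = 512, n_shape: int = 300):
        super().__init__()
        # Stand-in for a pretrained recognition backbone (e.g. an ArcFace-style
        # ResNet); frozen so the head inherits its robustness and generalization.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, embed_dim))
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Trainable mapping from identity features to metrical shape coefficients.
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 300),
            nn.ReLU(),
            nn.Linear(300, n_shape),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                 # keep the encoder fixed
            z = self.encoder(img)             # identity embedding
        return self.head(z)                   # shape coefficients

def train_step(model, decode_shape, opt, img, gt_vertices):
    # Supervised 3D loss: L1 between decoded and ground-truth vertices (in
    # metres), so the head learns actual face dimensions rather than a
    # scale-ambiguous shape.
    pred_vertices = decode_shape(model(img))  # (B, V, 3)
    loss = (pred_vertices - gt_vertices).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random tensors and a placeholder "decoder" that reshapes the
# 300 coefficients into 100 vertices; a real pipeline would decode through a
# statistical face model instead.
model = MicaStyleRegressor()
opt = torch.optim.AdamW(model.head.parameters(), lr=1e-4)
decode = lambda s: s.view(-1, 100, 3)
print(train_step(model, decode, opt, torch.randn(2, 3, 112, 112), torch.randn(2, 100, 3)))
```

One point this makes concrete is what "metrical" means at evaluation time: if predictions carry physical scale, the comparison against ground truth should use only a rigid alignment (rotation and translation), whereas common non-metric protocols also solve for a global scale and thereby hide size errors.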

Acknowledgement

We thank Haiwen Feng for support with NoW and Stirling evaluations, and Chunlu Li for providing FOCUS results. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Wojciech Zielonka.

Disclosure. While TB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, MPI. JT is supported by Microsoft Research gift funds.

Author information

Correspondence to Wojciech Zielonka.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 2156 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zielonka, W., Bolkart, T., Thies, J. (2022). Towards Metrical Reconstruction of Human Faces. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13673. Springer, Cham. https://doi.org/10.1007/978-3-031-19778-9_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-19778-9_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19777-2

  • Online ISBN: 978-3-031-19778-9

  • eBook Packages: Computer Science, Computer Science (R0)
