
From Skulls to Faces: A Deep Generative Framework for Realistic 3D Craniofacial Reconstruction

  • Conference paper
  • MultiMedia Modeling (MMM 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14554)


Abstract

The shape of the human face is largely determined by the underlying skull morphology. Craniofacial reconstruction (CfR), the process of reconstructing a face from a skull, is a challenging task with applications in forensic science, criminal investigation, and archaeology. Traditional CfR methods suffer from subjective interpretation and simple low-dimensional learning approaches, resulting in low reconstruction accuracy and realism. In this paper, we present a deep learning-based framework for CfR built on conditional generative adversarial networks. Unlike conventional methods that operate on 3D representations directly, we employ 2D depth maps to represent faces and skulls as the model's input and output; this representation preserves sufficient facial geometric information and may mitigate potential dimensionality issues. Our framework models both local and global details of facial appearance through a novel discriminator structure that fuses multi-receptive-field features into a single output, thus generating realistic and individualized faces from skulls. Furthermore, to explore the impact of conditional information such as age and gender on facial appearance, we develop a conditional CfR paradigm that incorporates an improved residual block structure with conditional information modulation and a conditional information reconstruction loss function. Extensive experiments and comparisons demonstrate the effectiveness and superiority of our method for CfR.
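The chapter's full architecture is not reproduced on this page, but the abstract outlines its key ingredients: depth-map inputs and outputs, residual blocks modulated by conditional attributes such as age and gender, and a discriminator that combines multi-receptive-field features into one output. The following is a minimal, illustrative PyTorch sketch of that kind of design, not the authors' implementation; all class names, layer sizes, the modulation scheme, and the dilation rates (CondResBlock, cond_dim, base, the 1/2/4 dilations) are assumptions made for the example.

# Minimal sketch (not the authors' code): a conditional generator mapping a
# skull depth map to a face depth map, with residual blocks modulated by
# conditional attributes, and a discriminator that fuses multi-receptive-field
# features into one output. All sizes and choices below are illustrative.
import torch
import torch.nn as nn


class CondResBlock(nn.Module):
    # Residual block whose normalized activations are scaled and shifted by a
    # projection of the conditional vector (a common conditional-modulation
    # pattern; the paper's exact block structure may differ).
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        h = torch.relu(self.norm(self.conv1(x)) * (1 + scale) + shift)
        return x + self.conv2(h)


class Generator(nn.Module):
    # Skull depth map (1 x H x W) plus condition vector -> face depth map.
    def __init__(self, cond_dim=2, base=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(1, base, 7, padding=3)
        self.blocks = nn.ModuleList(
            [CondResBlock(base, cond_dim) for _ in range(n_blocks)])
        self.head = nn.Conv2d(base, 1, 7, padding=3)

    def forward(self, skull_depth, cond):
        h = torch.relu(self.stem(skull_depth))
        for block in self.blocks:
            h = block(h, cond)
        return torch.tanh(self.head(h))


class MultiReceptiveFieldDiscriminator(nn.Module):
    # Extracts features at several dilation rates (i.e. several receptive
    # fields) and fuses them into a single real/fake score map.
    def __init__(self, base=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(2, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.branches = nn.ModuleList(
            [nn.Conv2d(base, base, 3, padding=d, dilation=d) for d in (1, 2, 4)])
        self.fuse = nn.Conv2d(3 * base, 1, 3, padding=1)

    def forward(self, skull_depth, face_depth):
        h = self.stem(torch.cat([skull_depth, face_depth], dim=1))
        feats = [torch.relu(branch(h)) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    skull = torch.randn(1, 1, 128, 128)  # rendered skull depth map
    cond = torch.tensor([[0.3, 1.0]])    # e.g. normalized age, gender flag
    face = Generator()(skull, cond)
    score = MultiReceptiveFieldDiscriminator()(skull, face)
    print(face.shape, score.shape)       # (1, 1, 128, 128), (1, 1, 64, 64)

A full training setup would additionally need the adversarial objective and the conditional information reconstruction loss mentioned in the abstract; those details are available only in the chapter itself.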

This work is supported by the National Natural Science Foundation of China (No. 82202079) and the Natural Science Foundation of Sichuan Province (No. 2022NSFSC1403).



Author information

Correspondence to Jiancheng Lv or Yuan Li.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pan, Y. et al. (2024). From Skulls to Faces: A Deep Generative Framework for Realistic 3D Craniofacial Reconstruction. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14554. Springer, Cham. https://doi.org/10.1007/978-3-031-53305-1_24


  • DOI: https://doi.org/10.1007/978-3-031-53305-1_24


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53304-4

  • Online ISBN: 978-3-031-53305-1

  • eBook Packages: Computer Science, Computer Science (R0)
