MeshSNet: Deep Multi-scale Mesh Feature Learning for End-to-End Tooth Labeling on 3D Dental Surfaces

  • Chunfeng Lian
  • Li Wang
  • Tai-Hsien Wu
  • Mingxia Liu
  • Francisca Durán
  • Ching-Chang Ko
  • Dinggang Shen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Accurate tooth labeling on 3D dental surfaces is a vital task in computer-aided orthodontic treatment planning. Existing automated or semi-automated methods usually still require human interaction, which is time-consuming. Moreover, they typically rely on simple geometric properties as segmentation criteria, which cannot adequately handle the high variation of tooth appearance across patients. Recently, several pioneering deep neural networks (e.g., PointNet) have been proposed in the computer vision and computer graphics communities to efficiently segment 3D shapes in an end-to-end manner. However, these methods do not perform well on our specific task of tooth labeling, largely because they cannot explicitly model the fine-grained local geometric context of teeth, which occupy only a small portion of the dental surface yet vary considerably in shape and appearance. In this paper, we propose a dedicated deep neural network (called MeshSNet) for end-to-end tooth segmentation on 3D dental surfaces captured by advanced intraoral scanners. Taking raw mesh data directly as input, our MeshSNet adopts novel graph-constrained learning modules to hierarchically extract multi-scale contextual features, and then densely integrates local-to-global geometric features to comprehensively characterize mesh cells for the segmentation task. We evaluated our proposed method on an in-house clinical dataset via 3-fold cross-validation. The experimental results demonstrate the superior performance of our MeshSNet method compared with state-of-the-art deep learning methods for 3D shape segmentation.
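
The sketch below is a minimal, hypothetical illustration (not the authors' released code) of the ideas the abstract describes: per-cell mesh features are refined by graph-constrained aggregation over cell adjacency at multiple scales, and the multi-scale features are densely fused with a global shape descriptor for per-cell tooth-label prediction. The layer sizes, the GCN-style propagation rule (after Kipf and Welling), the two-scale design, and the input/class dimensions are assumptions made only for illustration.

```python
# Hypothetical sketch of multi-scale, graph-constrained mesh feature learning
# for per-cell segmentation. All layer widths and the two-scale design are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class GraphConstrainedBlock(nn.Module):
    """Aggregate per-cell features over a (row-normalized) cell-adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, feats, adj):
        # feats: (B, N, in_dim) per-cell features; adj: (B, N, N) normalized adjacency
        aggregated = torch.bmm(adj, feats)      # mix each cell with its neighbors
        return self.mlp(aggregated)             # shared MLP on the aggregated features


class MeshSegSketch(nn.Module):
    def __init__(self, in_dim=15, num_classes=15):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.glm1 = GraphConstrainedBlock(64, 128)   # small-neighborhood context
        self.glm2 = GraphConstrainedBlock(128, 256)  # wider-neighborhood context
        self.classifier = nn.Sequential(
            nn.Linear(64 + 128 + 256 + 256, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, cell_feats, adj_local, adj_global):
        # cell_feats: (B, N, in_dim), e.g. vertex coordinates and normals per mesh cell
        x0 = self.embed(cell_feats)
        x1 = self.glm1(x0, adj_local)                # local contextual features
        x2 = self.glm2(x1, adj_global)               # larger-scale contextual features
        g = x2.max(dim=1, keepdim=True).values       # global shape descriptor
        g = g.expand(-1, x2.size(1), -1)             # broadcast to every cell
        fused = torch.cat([x0, x1, x2, g], dim=-1)   # dense local-to-global fusion
        return self.classifier(fused)                # per-cell class logits
```

Concatenating the features from every stage (rather than passing only the last stage to the classifier) is one way to realize the "dense integration of local-to-global geometric features" mentioned above; the specific fusion scheme used in the paper may differ.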

References

  1. Feng, Y., et al.: MeshNet: mesh neural network for 3D shape representation. In: AAAI (2019)
  2. Huang, Q., et al.: Recurrent slice networks for 3D segmentation of point clouds. In: CVPR, pp. 2626–2635 (2018)
  3. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR (2017)
  4. Kondo, T., et al.: Tooth segmentation of dental study models using range images. IEEE Trans. Med. Imaging 23(3), 350–362 (2004)
  5. Martin, C.B., et al.: Orthodontic scanners: what's available? J. Orthod. 42(2), 136–143 (2015)
  6. Qi, C.R., et al.: PointNet++: deep hierarchical feature learning on point sets in a metric space. In: NeurIPS, pp. 5099–5108 (2017)
  7. Qi, C.R., et al.: PointNet: deep learning on point sets for 3D classification and segmentation. In: CVPR, pp. 652–660 (2017)
  8. Reddi, S.J., et al.: On the convergence of Adam and beyond. In: ICLR (2018)
  9. Roberta, T., et al.: Study of the potential cytotoxicity of dental impression materials. Toxicol. In Vitro 17(5–6), 657–662 (2003)
  10. Sudre, C.H., et al.: Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In: DLMIA, pp. 240–248 (2017)
  11. Wu, W., et al.: PointConv: deep convolutional networks on 3D point clouds. In: CVPR, pp. 9621–9630 (2019)
  12. Xu, X., et al.: 3D tooth segmentation and labeling using deep convolutional neural networks. IEEE Trans. Vis. Comput. Graph. 25, 2336–2348 (2018)
  13. Zheng, S., et al.: Conditional random fields as recurrent neural networks. In: CVPR, pp. 1529–1537 (2015)
  14. Zou, B., et al.: Interactive tooth partition of dental mesh based on tooth-target harmonic field. Comput. Biol. Med. 56, 132–144 (2015)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
  2. Department of Oral and Craniofacial Health Sciences, University of North Carolina at Chapel Hill, Chapel Hill, USA
  3. Department of Orthodontics, University of North Carolina at Chapel Hill, Chapel Hill, USA