
Extending pretrained segmentation networks with additional anatomical structures

  • Firat Ozdemir
  • Orcun Goksel
Original Article

Abstract

Purpose

For comprehensive surgical planning with sophisticated patient-specific models, all relevant anatomical structures need to be segmented. This could be achieved using deep neural networks given sufficiently many annotated samples; however, datasets of multiple annotated structures are often unavailable in practice and costly to procure. Therefore, being able to build segmentation models with datasets from different studies and centers in an incremental fashion is highly desirable.

Methods

We propose a class-incremental framework for extending a deep segmentation network to new anatomical structures using a minimal incremental annotation set. By distilling knowledge from the current network state, we avoid the need for full retraining.
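The distillation idea above can be sketched as a combined loss: a supervised term on the newly annotated structure plus a distillation term that keeps the extended network's old-class outputs close to those of the frozen pre-extension network. The following is a minimal generic sketch in PyTorch, not the authors' exact implementation; the temperature `T`, weight `lam`, and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, frozen_logits, target, old_classes, T=2.0, lam=1.0):
    """Loss for extending a segmentation network with a new class.

    new_logits:    (N, C, H, W) outputs of the extended network.
    frozen_logits: (N, C_old, ...) outputs of the frozen pre-extension network
                   (here assumed to share the channel layout of old_classes).
    target:        (N, H, W) labels including the incremental annotations.
    old_classes:   channel indices of the previously learned structures.
    T, lam:        illustrative temperature and distillation weight.
    """
    # Supervised segmentation loss on the (sparse) incremental annotations.
    seg = F.cross_entropy(new_logits, target)

    # Distillation: match the softened old-class predictions of the
    # frozen network so old-structure performance is retained.
    p_old = F.log_softmax(new_logits[:, old_classes] / T, dim=1)
    q_old = F.softmax(frozen_logits[:, old_classes] / T, dim=1)
    distill = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)

    return seg + lam * distill
```

When the extended network still reproduces the frozen network's old-class outputs exactly, the distillation term vanishes and only the supervised term remains, which is what allows old-class performance to be retained without replaying the old training data.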

Results

We evaluate our method on 100 MR volumes from the SKI10 challenge with varying incremental annotation ratios. With 50% incremental annotations, our proposed method suffers less than 1% Dice score loss in retaining old-class performance, as opposed to a 25% loss with conventional finetuning. Our framework inherently exploits transferable knowledge from previously trained structures for incremental tasks, demonstrated by results superior even to non-incremental training: in a single-volume one-shot incremental learning setting, our method outperforms the vanilla network by more than 11% in Dice.

Conclusions

With the presented method, new anatomical structures can be learned while retaining performance for older structures, without a major increase in complexity and memory footprint, hence suitable for lifelong class-incremental learning. By leveraging information from older examples, a fraction of annotations can be sufficient for incrementally building comprehensive segmentation models. With our meta-method, a deep segmentation network is extended with only a minor addition per structure, thus can be applicable also for future network architectures.

Keywords

Patient-specific modeling · Segmentation · Deep learning · Class-incremental learning · Lifelong learning · Intervention planning

Notes

Acknowledgements

This work was funded by the Swiss National Science Foundation (Grant No. 179116) and a Highly Specialized Medicine grant (HSM2) of the Canton of Zurich.

Compliance with ethical standards

Conflict of interest

All authors declare no conflict of interest.

Research involving human participants

All procedures performed in studies involving human participants were in accordance with the ethical standards of the provincial ethics committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Supplementary material

Supplementary material 1: 11548_2019_1984_MOESM1_ESM.pdf (704 KB)


Copyright information

© CARS 2019

Authors and Affiliations

  1. Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland
