
Generating Large Labeled Data Sets for Laparoscopic Image Processing Tasks Using Unpaired Image-to-Image Translation

  • Micha Pfeiffer (corresponding author)
  • Isabel Funke
  • Maria R. Robu
  • Sebastian Bodenstedt
  • Leon Strenger
  • Sandy Engelhardt
  • Tobias Roß
  • Matthew J. Clarkson
  • Kurinchi Gurusamy
  • Brian R. Davidson
  • Lena Maier-Hein
  • Carina Riediger
  • Thilo Welsch
  • Jürgen Weitz
  • Stefanie Speidel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

In the medical domain, the lack of large training data sets and benchmarks is often a limiting factor for training deep neural networks. In contrast to expensive manual labeling, computer simulations can generate large, fully labeled data sets with a minimum of manual effort. However, models trained on simulated data usually do not generalize well to real scenarios. To bridge the domain gap between simulated and real laparoscopic images, we exploit recent advances in unpaired image-to-image translation. We extend an image-to-image translation method to generate a diverse set of realistic-looking synthetic images based on images from a simple laparoscopy simulation. By incorporating constraints that preserve the image content during the translation process, we ensure that the labels given for the simulated images remain valid for their realistic-looking translations. This lets us generate a large, fully labeled synthetic data set. We show that this data set can be used to train models for the task of liver segmentation in laparoscopic images. We achieve median Dice scores of up to 0.89 on some patients without manually labeling a single laparoscopic image and show that pre-training models on our synthetic data can greatly improve their performance. The synthetic data set is made publicly available, fully labeled with segmentation maps, depth maps, normal maps, and positions of tools and camera (http://opencas.dkfz.de/image2image).
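The central idea of the abstract, constraining an unpaired translation network so that scene structure (and with it the labels) survives the domain transfer, can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under assumed names, not the authors' implementation: `generator`, `discriminator`, and `lambda_content` are hypothetical, the adversarial term is a simplified hinge-style loss, and a single-scale SSIM stands in for the paper's exact content-preservation mechanism.

```python
import torch
import torch.nn.functional as F

def ssim(x: torch.Tensor, y: torch.Tensor, window: int = 11,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Single-scale SSIM over local windows, computed with average pooling.
    Expects image batches of shape (N, C, H, W) with values in [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def generator_loss(generator, discriminator, simulated, lambda_content=10.0):
    """Adversarial realism plus a structural penalty that ties the translated
    image to its simulated source, so the simulation's labels stay valid."""
    translated = generator(simulated)            # simulated -> realistic-looking
    adv = -discriminator(translated).mean()      # simplified hinge adversarial term
    content = 1.0 - ssim(simulated, translated)  # 0 when structure is unchanged
    return adv + lambda_content * content
```

Under an objective of this shape, the generator is free to change texture, color, and lighting but is penalized for moving organ boundaries, which is what would allow the simulator's segmentation maps, depth maps, and normal maps to be reused for the translated images.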

Keywords

Unsupervised · GAN · Image translation · Segmentation

Supplementary material

Supplementary material 1: 490279_1_En_14_MOESM1_ESM.pdf (6,129 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Micha Pfeiffer (1), corresponding author
  • Isabel Funke (1)
  • Maria R. Robu (2, 3)
  • Sebastian Bodenstedt (1)
  • Leon Strenger (1)
  • Sandy Engelhardt (4)
  • Tobias Roß (5)
  • Matthew J. Clarkson (2, 3)
  • Kurinchi Gurusamy (6)
  • Brian R. Davidson (6)
  • Lena Maier-Hein (5)
  • Carina Riediger (7)
  • Thilo Welsch (7)
  • Jürgen Weitz (7, 8)
  • Stefanie Speidel (1, 8)

  1. Division of Translational Surgical Oncology, National Center for Tumor Diseases, Dresden, Germany
  2. Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  3. Centre for Medical Image Computing, University College London, London, UK
  4. Faculty of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
  5. Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  6. Division of Surgery and Interventional Science, University College London, London, UK
  7. Department for Visceral, Thoracic and Vascular Surgery, University Hospital Dresden, Dresden, Germany
  8. Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
