Cross-Domain Conditional Generative Adversarial Networks for Stereoscopic Hyperrealism in Surgical Training

  • Sandy Engelhardt
  • Lalith Sharan
  • Matthias Karck
  • Raffaele De Simone
  • Ivo Wolf
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Phantoms for surgical training are able to mimic cutting and suturing properties and the patient-individual shape of organs, but lack a realistic visual appearance that captures the heterogeneity of surgical scenes. To overcome this in endoscopic approaches, hyperrealistic concepts based on deep image-to-image transformation methods have been proposed for use in an augmented reality setting. Such concepts are able to generate realistic representations of phantoms learned from real intraoperative endoscopic sequences. Conditioned on frames from the surgical training process, the learned models generate impressive results by transforming unrealistic parts of the image (e.g. the uniform phantom texture is replaced by the more heterogeneous texture of the tissue). Image-to-image synthesis usually learns a mapping \(G: X \rightarrow Y\) such that the distribution of images from G(X) is indistinguishable from the distribution Y. However, it does not necessarily force the generated images to be consistent and free of artifacts. In the endoscopic image domain this can affect depth cues and the stereo consistency of a stereo image pair, which ultimately impairs surgical vision. We propose a cross-domain conditional generative adversarial network (GAN) approach that aims to generate more consistent stereo pairs. Evaluated on a 3D monitor by 3 domain experts and 3 medical students, the results show substantial improvements in depth perception and realism over the baseline method. In 84 of 90 instances our proposed method was preferred or rated equal to the baseline.
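
To make the idea concrete, below is a minimal PyTorch sketch of how an image-to-image generator \(G: X \rightarrow Y\) trained with an adversarial loss can be regularized with a cross-view consistency term for a stereo pair. TinyGenerator, TinyDiscriminator, the plain L1 cross-view penalty and the weight lam_stereo are illustrative assumptions, not the architecture or loss used in the paper; a faithful stereo term would account for the disparity between the two views.

```python
# Minimal sketch (PyTorch): an image-to-image generator trained adversarially,
# plus an extra L1 term that ties the two translated views of a stereo pair
# together. Network sizes, the warp-free consistency term and all
# hyperparameters are illustrative assumptions, not the authors' method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for the full translation network."""
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class TinyDiscriminator(nn.Module):
    """Toy patch-style discriminator producing per-patch real/fake logits."""
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def generator_loss(G, D, left_phantom, right_phantom, lam_stereo=10.0):
    """Adversarial loss for both views plus a cross-view consistency penalty.

    The L1 penalty between the two translated views is a deliberately
    simplified stand-in for a disparity-aware consistency term; a faithful
    version would warp one view into the other using the stereo geometry.
    """
    fake_left, fake_right = G(left_phantom), G(right_phantom)
    # Non-saturating GAN loss: the generator wants D to label its output "real" (=1).
    adv = 0.0
    for fake in (fake_left, fake_right):
        logits = D(fake)
        adv = adv + F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    stereo = F.l1_loss(fake_left, fake_right)  # placeholder consistency term
    return adv + lam_stereo * stereo


if __name__ == "__main__":
    G, D = TinyGenerator(), TinyDiscriminator()
    left = torch.randn(1, 3, 64, 64)   # stands in for the left phantom frame
    right = torch.randn(1, 3, 64, 64)  # stands in for the right phantom frame
    print(generator_loss(G, D, left, right).item())
```

In this sketch the consistency weight lam_stereo trades off realism against agreement between the two views; the discriminator update (pushing real frames toward 1 and generated frames toward 0) is omitted for brevity.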

Keywords

Generative adversarial networks · Minimally-invasive surgical training · Augmented reality · Mitral valve simulator · Laparoscopy

Notes

Acknowledgments

The research was supported by the German Research Foundation (DFG), project 398787259, DE 2131/2-1 and EN 1197/2-1. The GPU was donated by Nvidia through a small-scale grant.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
  2. Department of Cardiac Surgery, Heidelberg University Hospital, Heidelberg, Germany
  3. Department of Simulation and Graphics and Research Campus STIMULATE, Magdeburg University, Magdeburg, Germany