
3D-Reconstruction and Semantic Segmentation of Cystoscopic Images

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 633)

Abstract

Bladder cancer (BCa) is the fourth most common cancer and the eighth most common cause of cancer-related mortality in men. Although roughly 75% of patients are diagnosed with non-muscle invasive bladder cancer (NMIBC), recurrence rates are high even at a low stage and grade. Transurethral resection of the bladder (TURB) is required to establish the pathological diagnosis and clinical staging of patients. In daily clinical practice, however, conventional tumor documentation after TURB lacks accuracy, posing a major limitation for patient follow-up and surveillance. Novel technologies that facilitate data documentation and interpretation are imperative to improve patient outcomes. As part of the RaVeNNA-4pi initiative, our contribution is twofold: first, we propose a bladder 3D-reconstruction method using Structure-from-Motion (SfM). Second, we propose deep convolutional neural networks (DCNN) for cystoscopic image segmentation to improve the interpretation of cystoscopic findings and the localization of tumors. 3D reconstruction of endoscopic images assists physicians in navigating the bladder and monitoring successive resections. Nevertheless, this process is challenging due to an endoscope's narrow field of view (FoV), the illumination conditions and the bladder's highly dynamic structure. So far in our project, the SfM approach has been tested on a bladder phantom, demonstrating that the processing sequence permits a 3D reconstruction. Subsequently, we will test our approach on bladder images from patients generated in real time with a rigid cystoscope. In recent years, deep learning (DL) has enabled significant progress in medical image analysis. Accurate localization of structures such as tumors is of particular interest in processing medical images. In this work, we apply a DCNN for multi-class semantic segmentation of cystoscopic images. Moreover, we introduce a new training dataset for evaluating state-of-the-art DL models on cystoscopic images.
Our results show that an average Dice score coefficient (DSC) of 0.67 can be achieved.
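The core geometric step of the SfM pipeline described above is recovering a 3D point from its projections in two overlapping views with known camera poses. A minimal NumPy sketch of linear (DLT) triangulation is given below; it is an illustration of the general technique, not the project's implementation, and the camera matrices are invented for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    Returns the 3D point that best satisfies both projections.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P row 3) - (P row 1) and
    # v * (P row 3) - (P row 2).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Example: two unit-focal-length cameras, the second shifted by 1 along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

In a full SfM pipeline, such triangulated points are subsequently refined jointly with the camera poses by bundle adjustment.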
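The Dice score coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth annotation, 2|A∩B| / (|A|+|B|) per class. A minimal NumPy sketch of the multi-class mean DSC (the example masks are invented for illustration):

```python
import numpy as np

def mean_dice(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Dice score coefficient over classes: 2|A∩B| / (|A| + |B|).

    pred, target: integer label maps of identical shape.
    Classes absent from both masks are skipped rather than counted as 1.
    """
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:  # class absent in both masks
            continue
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

pred = np.array([[0, 1, 1],
                 [2, 2, 0]])
target = np.array([[0, 1, 0],
                   [2, 2, 0]])
print(round(mean_dice(pred, target, num_classes=3), 3))  # 0.822
```

A DSC of 1.0 indicates perfect overlap with the annotation; 0.0 indicates no overlap at all.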

Keywords

Cystoscopy Bladder cancer 3D-reconstruction Structure from motion (SfM) Deep learning Neural networks Semantic segmentation 

Notes

Acknowledgments

This study was funded by the German Federal Ministry of Education and Research (grant number 13GW0203A). We also thank Grigor Andreev for annotating the cystoscopic images used to build the dataset for training the neural networks. Furthermore, we thank Karl Storz GmbH, Tuttlingen, for their support in building the experimental setup for the phantom bladder recording.

Conflict of Interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Sustainable Systems Engineering (INATECH), University of Freiburg, Freiburg, Germany
  2. Fraunhofer Institute for Physical Measurement Techniques IPM, Freiburg, Germany
  3. Department of Urology, Faculty of Medicine, University of Freiburg – Medical Centre, Freiburg, Germany
