Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative anatomical priors

  • Masoud S. Nosrati
  • Alborz Amir-Khalili
  • Jean-Marc Peyrat
  • Julien Abinahed
  • Osama Al-Alao
  • Abdulla Al-Ansari
  • Rafeef Abugharbieh
  • Ghassan Hamarneh
Original Article

Abstract

Purpose

Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue).

Methods

In this paper, we propose a variational technique to augment a surgeon’s endoscopic view by segmenting both visible and occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align them to, and segment, the corresponding structures in 2D intraoperative endoscopic views. The preoperative-to-intraoperative alignment is driven first by spatio-temporal, signal-processing-based vessel pulsation cues and second by machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues to guide preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous), physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention.
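
To illustrate the pulsation cue, the following minimal sketch (not the authors’ implementation; Python with NumPy/SciPy is used purely for exposition) shows one plausible way to derive a pulsatile-motion saliency map by temporally band-pass filtering pixel intensities of an endoscopic clip around the cardiac frequency. The frame rate, cut-off frequencies, and the pulsation_saliency helper are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def pulsation_saliency(frames, fps=30.0, f_lo=0.8, f_hi=2.0, order=3):
    """frames: (T, H, W) grayscale clip; returns an (H, W) map in [0, 1].

    The 0.8-2.0 Hz band roughly brackets resting heart rates (~50-120 bpm);
    these limits are assumed values, not taken from the paper.
    """
    nyq = 0.5 * fps
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="bandpass")
    # Band-pass filter each pixel's intensity profile along the time axis.
    filtered = filtfilt(b, a, frames.astype(np.float64), axis=0)
    # Pixels whose intensity oscillates strongly in the cardiac band are
    # candidate (possibly occluded) vessel regions.
    saliency = filtered.std(axis=0)
    return saliency / (saliency.max() + 1e-12)

# Example usage on a synthetic clip (5 s of video at 30 fps):
# clip = np.random.rand(150, 240, 320)
# vessel_map = pulsation_saliency(clip)

In practice, such a saliency map would be only one of the cues (alongside colour and texture classification) driving the alignment of the preoperative model to the endoscopic view.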

Results

We validated the utility of our technique on fifteen challenging clinical cases, obtaining a 45 % improvement in accuracy compared with the state-of-the-art method.

Conclusions

A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, as well as vasculature pulsation and endoscopic visual cues in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.

Keywords

Robotic surgery · Partial nephrectomy · Image-guided surgery · Segmentation · 3D pose estimation · Endoscopy · Patient-specific model · Occluded vessels · Kidney

Notes

Acknowledgments

This publication was made possible by NPRP Grant #4-161-2-056 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Compliance with ethical standards

Conflicts of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

This article does not contain patient information.


Copyright information

© CARS 2016

Authors and Affiliations

  • Masoud S. Nosrati (1)
  • Alborz Amir-Khalili (2)
  • Jean-Marc Peyrat (3)
  • Julien Abinahed (3)
  • Osama Al-Alao (4)
  • Abdulla Al-Ansari (4)
  • Rafeef Abugharbieh (2)
  • Ghassan Hamarneh (1)
  1. Medical Image Analysis Lab, Simon Fraser University, Burnaby, Canada
  2. BiSICL, University of British Columbia, Vancouver, Canada
  3. Qatar Robotic Surgery Centre, Qatar Science and Technology Park, Doha, Qatar
  4. Urology Department, Hamad General Hospital, Hamad Medical Corporation, Doha, Qatar
