Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos
Real-time surgical tool tracking is a core component of the future intelligent operating room (OR), as it is instrumental for analyzing and understanding surgical activities. Current methods for surgical tool tracking in videos must be trained on data in which the spatial positions of the tools are manually annotated. Generating such training data is difficult and time-consuming. Instead, we propose to train a tool tracker for laparoscopic videos using solely binary presence annotations.
The proposed approach is composed of a CNN + Convolutional LSTM (ConvLSTM) neural network trained end to end, but weakly supervised using only tool binary presence labels. We use the ConvLSTM to model the temporal dependencies in the motion of the surgical tools and leverage its spatiotemporal ability to smooth the class peak activations in the localization heat maps (Lh-maps).
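To make this architecture concrete, the following is a minimal PyTorch sketch of how a CNN + ConvLSTM can be trained from binary presence labels alone. The toy convolutional backbone, the number of tool classes, the layer sizes, and the spatial max-pooling used to turn per-class Lh-maps into presence logits are illustrative assumptions, not the exact configuration of the paper.

```python
# Minimal sketch (PyTorch). Assumptions: a toy CNN stands in for the paper's
# backbone, 7 tool classes are used for illustration, and spatial max-pooling
# converts per-class localization heat maps (Lh-maps) into presence logits.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: the gates are computed with convolutions instead
    of fully connected layers, so the hidden state keeps its spatial layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class WeaklySupervisedTracker(nn.Module):
    """CNN features -> ConvLSTM over time -> per-class Lh-maps -> spatial
    max-pooling to presence logits, so only binary labels supervise training."""
    def __init__(self, num_tools=7, feat_ch=64, hid_ch=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # stand-in backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.to_maps = nn.Conv2d(hid_ch, num_tools, 1)  # 1x1 conv -> Lh-maps

    def forward(self, clip):                            # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        h = c = None
        logits, maps = [], []
        for t in range(T):
            f = self.cnn(clip[:, t])
            if h is None:
                h = f.new_zeros(B, self.convlstm.hid_ch, *f.shape[2:])
                c = torch.zeros_like(h)
            h, c = self.convlstm(f, (h, c))
            m = self.to_maps(h)                         # per-class Lh-maps
            logits.append(m.amax(dim=(2, 3)))           # max-pool -> presence
            maps.append(m)
        return torch.stack(logits, 1), torch.stack(maps, 1)

if __name__ == "__main__":
    model = WeaklySupervisedTracker()
    video = torch.randn(2, 4, 3, 128, 128)              # (batch, time, C, H, W)
    presence_logits, lh_maps = model(video)
    # Training uses only per-frame binary presence labels:
    labels = torch.randint(0, 2, presence_logits.shape).float()
    loss = nn.BCEWithLogitsLoss()(presence_logits, labels)
    loss.backward()
    # At test time, peaks in lh_maps provide tool locations for tracking.
```

Because the loss is computed only on the pooled presence logits, the Lh-maps themselves are never directly supervised; their peak activations are what a tracker can follow across consecutive frames.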
We build a baseline tracker on top of the CNN model and demonstrate that our approach based on the ConvLSTM outperforms the baseline in tool presence detection, spatial localization, and motion tracking by over \(5.0\%\), \(13.9\%\), and \(12.6\%\), respectively.
In this paper, we demonstrate that binary presence labels are sufficient for training a deep learning tracking model using our proposed method. We also show that the ConvLSTM can leverage the spatiotemporal coherence of consecutive image frames across a surgical video to improve tool presence detection, spatial localization, and motion tracking.
Keywords: Surgical workflow analysis · Tool tracking · Weak supervision · Spatiotemporal coherence · ConvLSTM · Endoscopic videos
This work was supported by French state funds managed within the Investissements d'Avenir program by BPI France (Project CONDOR) and by the ANR (References ANR-11-LABX-0004 and ANR-10-IAHU-02).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
This article does not contain any studies with human participants or animals performed by any of the authors.
A statement of informed consent was not applicable, since the manuscript does not contain any patient data.