Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos

  • Chinedu Innocent Nwoye
  • Didier Mutter
  • Jacques Marescaux
  • Nicolas Padoy
Original Article

Abstract

Purpose

Real-time surgical tool tracking is a core component of the future intelligent operating room (OR) because it is instrumental to analyzing and understanding surgical activities. Current methods for surgical tool tracking in videos need to be trained on data in which the spatial positions of the tools are manually annotated. Generating such training data is difficult and time-consuming. Instead, we propose to train a tool tracker for laparoscopic videos using solely binary tool presence annotations.

Methods

The proposed approach is a CNN + Convolutional LSTM (ConvLSTM) neural network trained end to end, but weakly supervised using only binary tool presence labels. We use the ConvLSTM to model the temporal dependencies in the motion of the surgical tools and leverage its spatiotemporal ability to smooth the class peak activations in the localization heat maps (Lh-maps).
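
To make this concrete, the following minimal sketch (in PyTorch) shows the kind of model this description implies: a CNN backbone produces features, a ConvLSTM cell propagates them over time, a 1x1 convolution emits per-class Lh-maps, and spatial max pooling turns each map into a presence logit so that binary presence labels are the only supervision. The layer sizes, the stand-in backbone, and names such as WeaklySupervisedTracker are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        """A single ConvLSTM cell: LSTM gates computed with convolutions."""

        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            # One convolution yields all four gates (input, forget, cell, output).
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

        def forward(self, x, state):
            h, c = state
            i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    class WeaklySupervisedTracker(nn.Module):
        def __init__(self, num_tools=7):
            super().__init__()
            # Stand-in backbone; the paper's CNN is deeper (e.g. a ResNet).
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
            self.convlstm = ConvLSTMCell(128, 64)
            self.to_lhmaps = nn.Conv2d(64, num_tools, 1)  # one Lh-map per tool

        def forward(self, clip):                    # clip: (batch, time, 3, H, W)
            b, t = clip.shape[:2]
            feats = [self.backbone(clip[:, s]) for s in range(t)]
            h = torch.zeros(b, 64, *feats[0].shape[-2:], device=clip.device)
            c = torch.zeros_like(h)
            logits, lhmaps = [], []
            for f_t in feats:
                h, c = self.convlstm(f_t, (h, c))
                maps = self.to_lhmaps(h)            # (batch, num_tools, h', w')
                lhmaps.append(maps)
                # Spatial max pooling: a heat-map peak becomes a presence logit,
                # so image-level labels alone drive the localization maps.
                logits.append(maps.amax(dim=(2, 3)))
            return torch.stack(logits, 1), torch.stack(lhmaps, 1)

    # Training then reduces to per-frame binary cross-entropy on presence:
    model = WeaklySupervisedTracker()
    clip = torch.randn(2, 4, 3, 128, 160)               # toy 4-frame clip
    labels = torch.randint(0, 2, (2, 4, 7)).float()     # per-frame presence
    loss = nn.BCEWithLogitsLoss()(model(clip)[0], labels)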

Results

We build a baseline tracker on top of the CNN model and demonstrate that our approach based on the ConvLSTM outperforms the baseline in tool presence detection, spatial localization, and motion tracking by over 5.0%, 13.9%, and 12.6%, respectively.
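
A plausible post-processing step for such a tracker, sketched below purely for illustration, localizes each tool at the peak of its Lh-map when the peak exceeds a threshold and links detections across frames by Hungarian assignment on Euclidean distance. The threshold, the distance gate, and the helpers localize and link_tracks are hypothetical, not the paper's exact procedure.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def localize(lhmap, thresh=0.5):
        """Return the (x, y) peak of one tool's Lh-map, or None if absent."""
        if lhmap.max() < thresh:                    # tool judged absent
            return None
        y, x = np.unravel_index(lhmap.argmax(), lhmap.shape)
        return float(x), float(y)

    def link_tracks(prev_pts, curr_pts, max_dist=30.0):
        """Match detections across frames by minimal total Euclidean cost."""
        cost = np.linalg.norm(
            np.asarray(prev_pts)[:, None] - np.asarray(curr_pts)[None], axis=-1)
        rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
        # Reject implausible jumps so a new track can be started instead.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]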

Conclusions

In this paper, we demonstrate that binary presence labels are sufficient for training a deep learning tracking model using our proposed method. We also show that the ConvLSTM can leverage the spatiotemporal coherence of consecutive image frames across a surgical video to improve tool presence detection, spatial localization, and motion tracking.

Keywords

Surgical workflow analysis · Tool tracking · Weak supervision · Spatiotemporal coherence · ConvLSTM · Endoscopic videos

Notes

Acknowledgements

This work was supported by French state funds managed within the Investissements d'Avenir program by BPI France (Project CONDOR) and by the ANR (References ANR-11-LABX-0004 and ANR-10-IAHU-02).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was not applicable since the manuscript does not contain any patient data.

Copyright information

© CARS 2019

Authors and Affiliations

  1. ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
  2. University Hospital of Strasbourg, IRCAD, IHU Strasbourg, France
