
Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos

  • Original Article
  • International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Real-time surgical tool tracking is a core component of the future intelligent operating room (OR), because it is highly instrumental in analyzing and understanding surgical activities. Current methods for surgical tool tracking in videos need to be trained on data in which the spatial positions of the tools are manually annotated. Generating such training data is difficult and time-consuming. Instead, we propose to use solely binary presence annotations to train a tool tracker for laparoscopic videos.

Methods

The proposed approach is composed of a CNN + Convolutional LSTM (ConvLSTM) neural network trained end to end, but weakly supervised on tool binary presence labels only. We use the ConvLSTM to model the temporal dependencies in the motion of the surgical tools and leverage its spatiotemporal ability to smooth the class peak activations in the localization heat maps (Lh-maps).
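To make the architecture concrete, here is a minimal NumPy sketch of a CNN-feature + ConvLSTM pipeline that emits per-class localization heat maps (Lh-maps) and reduces them to presence logits by spatial max-pooling. The shapes, the 7-class head, and the random "CNN features" are illustrative assumptions; the authors' actual backbone and pooling are described in the full text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution.
    x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    k = w.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.empty((w.shape[0], x.shape[1], x.shape[2]))
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            # contract (C_in, k, k) patch against all output filters at once
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3) + b
    return out

class ConvLSTMCell:
    """One ConvLSTM cell: the four LSTM gates are computed with
    convolutions over the input and hidden feature maps."""
    def __init__(self, c_in, c_hid, k=3, seed=0):
        rng = np.random.default_rng(seed)
        self.wx = rng.normal(0, 0.1, (4 * c_hid, c_in, k, k))
        self.wh = rng.normal(0, 0.1, (4 * c_hid, c_hid, k, k))
        self.b = np.zeros(4 * c_hid)

    def step(self, x, h, c):
        z = conv2d(x, self.wx, self.b) + conv2d(h, self.wh, np.zeros_like(self.b))
        i, f, o, g = np.split(z, 4, axis=0)        # input, forget, output, candidate
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

rng = np.random.default_rng(0)
T, C_IN, H, W = 3, 4, 8, 8        # short clip of CNN feature maps
C_HID, N_TOOLS = 6, 7             # hidden channels; 7 tool classes (assumption)

cell = ConvLSTMCell(C_IN, C_HID)
w_head = rng.normal(0, 0.1, (N_TOOLS, C_HID, 1, 1))  # 1x1 conv head -> Lh-maps
b_head = np.zeros(N_TOOLS)

h = np.zeros((C_HID, H, W))
c = np.zeros((C_HID, H, W))
for t in range(T):
    feat = rng.normal(size=(C_IN, H, W))   # stand-in for per-frame CNN features
    h, c = cell.step(feat, h, c)

lhmaps = conv2d(h, w_head, b_head)                   # per-class Lh-maps (7, 8, 8)
presence_probs = sigmoid(lhmaps.max(axis=(1, 2)))    # spatial max-pool -> presence
```

Under this weak-supervision scheme, only `presence_probs` is compared against the binary labels (e.g., with cross-entropy), yet the gradient flows back through the spatial max, so the Lh-maps learn to localize the tools as a by-product.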

Results

We build a baseline tracker on top of the CNN model and demonstrate that our approach based on the ConvLSTM outperforms the baseline in tool presence detection, spatial localization, and motion tracking by over \(5.0\%\), \(13.9\%\), and \(12.6\%\), respectively.

Conclusions

In this paper, we demonstrate that binary presence labels are sufficient for training a deep learning tracking model using our proposed method. We also show that the ConvLSTM can leverage the spatiotemporal coherence of consecutive image frames across a surgical video to improve tool presence detection, spatial localization, and motion tracking.
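As an illustration of how class peak activations in the Lh-maps can drive both presence detection and spatial localization, the following NumPy sketch reads a presence probability from the spatial max of each map and takes the argmax position as the tool center. The fixed sigmoid/threshold readout is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def localize(lhmaps, threshold=0.5):
    """Turn a stack of class localization heat maps (C, H, W) into
    per-class presence probabilities and peak (row, col) locations.
    Presence = sigmoid of the spatial max; location = argmax of the map."""
    probs = 1.0 / (1.0 + np.exp(-lhmaps.max(axis=(1, 2))))
    centers = []
    for c in range(lhmaps.shape[0]):
        if probs[c] >= threshold:
            centers.append(np.unravel_index(np.argmax(lhmaps[c]), lhmaps[c].shape))
        else:
            centers.append(None)   # tool class c judged absent in this frame
    return probs, centers

# toy example: class 0 has a clear activation peak, class 1 is suppressed
lhmaps = np.zeros((2, 5, 5))
lhmaps[0, 2, 3] = 3.0
lhmaps[1] -= 4.0
probs, centers = localize(lhmaps)
```

Linking the per-frame centers of each class across consecutive frames then yields the motion trajectories on which tracking performance is evaluated.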


Notes

  1. This can only arise for the grasper in this dataset.


Acknowledgements

This work was supported by French state funds managed within the Investissements d'Avenir program by BPI France (Project CONDOR) and by the ANR (References ANR-11-LABX-0004 and ANR-10-IAHU-02).

Author information

Corresponding author

Correspondence to Chinedu Innocent Nwoye.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

A statement of informed consent was not applicable, since the manuscript does not contain any patient data.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Nwoye, C.I., Mutter, D., Marescaux, J. et al. Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. Int J CARS 14, 1059–1067 (2019). https://doi.org/10.1007/s11548-019-01958-6
