Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data

  • Sebastian Bodenstedt
  • Martin Wagner
  • Lars Mündermann
  • Hannes Kenngott
  • Beat Müller-Stich
  • Michael Breucha
  • Sören Torge Mees
  • Jürgen Weitz
  • Stefanie Speidel
Original Article



Purpose

The course of surgical procedures is often unpredictable, making it difficult to estimate their duration in advance and, in turn, to schedule interventions efficiently. A context-aware method that analyses the workflow of an intervention online and automatically predicts the remaining duration would alleviate these problems. As a basis for such an estimate, information regarding the current state of the intervention is required.


Methods

Today, the operating room contains a diverse range of sensors. During laparoscopic interventions, the endoscopic video stream is an ideal source of information on the current state of the intervention, but extracting quantitative information from the video is challenging due to its high dimensionality. Other surgical devices (e.g., insufflator, lights) provide data streams that are, in contrast to the video stream, compact and easy to quantify, though it is uncertain whether they carry sufficient information for estimating the duration of surgery. In this paper, we propose and compare methods based on convolutional neural networks for continuously predicting the duration of laparoscopic interventions from unlabeled data, such as the endoscopic video and surgical device streams.
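To make the setup concrete, the sketch below shows one way such a multimodal model could look: a CNN extracts per-frame features from the endoscopic video, these are concatenated with the surgical device signals for the same timestep, and a recurrent unit produces a per-timestep estimate of the remaining duration. This is a minimal sketch under stated assumptions, not the paper's exact architecture; the ResNet-18 backbone, the GRU, the number of device signals, and the hidden size are illustrative choices.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class DurationPredictor(nn.Module):
    """Fuses per-frame CNN features with device signals via a GRU."""

    def __init__(self, num_device_signals: int = 14, hidden_size: int = 128):
        super().__init__()
        # Visual backbone; ResNet-18 is an illustrative choice, not the
        # architecture specified by the paper.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier
        self.gru = nn.GRU(512 + num_device_signals, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # remaining duration (e.g., in minutes)

    def forward(self, frames: torch.Tensor, devices: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W); devices: (batch, time, num_device_signals)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)  # (b, t, 512)
        fused = torch.cat([feats, devices], dim=-1)  # per-timestep multimodal fusion
        out, _ = self.gru(fused)                     # temporal context over the procedure
        return self.head(out).squeeze(-1)            # one estimate per timestep: (b, t)
```

Note that a model of this form can be trained without manual annotations: the regression target at every frame, the true remaining time, follows directly from the length of the recording.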


Results

The methods are evaluated on 80 recorded laparoscopic interventions of various types, for which both surgical device data and the endoscopic video stream are available. The combined method performs best, with an overall average error of 37% and an average halftime error of approximately 28%.
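For concreteness, one way such percentage errors could be computed is sketched below. The abstract does not define the metric precisely, so the normalization by total procedure duration, the one-prediction-per-time-unit layout, and the reading of "halftime error" as the average error over the second half of a procedure are all assumptions.

```python
import numpy as np


def duration_errors(pred_remaining: np.ndarray, total_duration: float):
    """pred_remaining[t]: predicted remaining time at timestep t, with one
    prediction per time unit of total_duration (an assumption)."""
    t = np.arange(len(pred_remaining))
    true_remaining = total_duration - t
    # Absolute error, normalized by the total procedure duration.
    rel_err = np.abs(pred_remaining - true_remaining) / total_duration
    half = len(rel_err) // 2
    # Overall average error and average error after the procedure halftime.
    return rel_err.mean(), rel_err[half:].mean()
```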


Conclusion

In this paper, we present, to our knowledge, the first approach for online procedure duration prediction in a laparoscopic setting that uses unlabeled endoscopic video and surgical device data. Furthermore, we show that a method incorporating both vision and device data outperforms methods based on vision alone, while methods based solely on tool usage and surgical device data perform poorly, underlining the importance of the visual channel.


Keywords: Surgical workflow analysis · Duration prediction · Sensor OR


Compliance with ethical standards

Conflict of interest

S. Bodenstedt, M. Wagner, L. Mündermann, H. Kenngott, B. Müller-Stich, M. Breucha, S. Mees, J. Weitz and S. Speidel declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from the study participants.



Copyright information

© CARS 2019

Authors and Affiliations

  • Sebastian Bodenstedt¹ (corresponding author)
  • Martin Wagner²
  • Lars Mündermann³
  • Hannes Kenngott²
  • Beat Müller-Stich²
  • Michael Breucha⁴
  • Sören Torge Mees⁴
  • Jürgen Weitz⁴
  • Stefanie Speidel¹

  1. Department for Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
  2. Department of General, Visceral and Transplant Surgery, University of Heidelberg, Heidelberg, Germany
  3. KARL STORZ SE & Co. KG, Tuttlingen, Germany
  4. Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
