Automatic annotation of surgical activities using virtual reality environments

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, and in particular for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is costly and time-consuming, creating a major bottleneck for these technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process.

Methods

Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations, producing individual surgical process models as output.
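To illustrate the idea, the following minimal sketch (not the authors' implementation; the event names, fields, and activity vocabulary are hypothetical) pairs timestamped contact events, as a virtual reality simulator might log them, into low-level activity annotations of the kind that make up an individual surgical process model.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class InteractionEvent:
    """Hypothetical record a VR simulator might log for a tool/object contact."""
    t: float      # simulation time in seconds
    tool: str     # e.g. "left_grasper"
    target: str   # e.g. "peg_3"
    kind: str     # "contact_start" or "contact_end"

@dataclass
class Activity:
    """One low-level activity annotation of an individual surgical process model."""
    start: float
    end: float
    action: str
    instrument: str
    target: str

def events_to_activities(events: List[InteractionEvent],
                         action: str = "grasp") -> List[Activity]:
    """Pair contact_start/contact_end events into timed activity annotations."""
    open_contacts: Dict[Tuple[str, str], float] = {}   # (tool, target) -> start time
    activities: List[Activity] = []
    for ev in sorted(events, key=lambda e: e.t):
        key = (ev.tool, ev.target)
        if ev.kind == "contact_start":
            open_contacts[key] = ev.t
        elif ev.kind == "contact_end" and key in open_contacts:
            activities.append(Activity(open_contacts.pop(key), ev.t,
                                       action, ev.tool, ev.target))
    return activities

if __name__ == "__main__":
    log = [
        InteractionEvent(1.2, "left_grasper", "peg_3", "contact_start"),
        InteractionEvent(4.8, "left_grasper", "peg_3", "contact_end"),
    ]
    for a in events_to_activities(log):
        print(f"{a.start:.1f}-{a.end:.1f}s {a.action}({a.instrument}, {a.target})")
```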

Validation

We implemented our approach in a peg-transfer task simulator and compared its output to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability.
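As an illustration of how such a comparison could be quantified, the sketch below computes a frame-wise agreement ratio between two annotations of the same recording; the metric and the data layout are assumptions for illustration, not necessarily those used in the study.

```python
from typing import List, Tuple

# An annotation is a list of (start, end, label) segments over a recording.
Segment = Tuple[float, float, str]

def label_at(annotation: List[Segment], t: float, idle: str = "idle") -> str:
    """Return the activity label active at time t, or 'idle' if none applies."""
    for start, end, label in annotation:
        if start <= t < end:
            return label
    return idle

def framewise_agreement(a: List[Segment], b: List[Segment],
                        duration: float, fps: float = 30.0) -> float:
    """Fraction of sampled frames on which two annotations assign the same label."""
    n = int(duration * fps)
    same = sum(label_at(a, i / fps) == label_at(b, i / fps) for i in range(n))
    return same / n if n else 0.0

if __name__ == "__main__":
    observer_1 = [(0.0, 4.5, "grasp"), (4.5, 9.0, "transfer")]
    observer_2 = [(0.0, 4.2, "grasp"), (4.2, 9.0, "transfer")]
    print(f"agreement: {framewise_agreement(observer_1, observer_2, 9.0):.2%}")
```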

Results and conclusion

On average, manual annotation took more than 12 min per minute of video to achieve low-level physical activity annotation, whereas automatic annotation was produced in less than a second for the same video period. We also showed that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses thanks to its high precision and reproducibility.

Acknowledgements

This work was funded by the ImPACT Program of the Council for Science, Technology and Innovation, Cabinet Office, Government of Japan. The authors thank IRT b<>com for providing the software "Surgery Workflow Toolbox [annotated]" used for this work. The authors especially thank Ms. M. Le Duff, Mr. A. Derathé, Mr. T. Dognon, Mr. E. Maguet and Mr. B. Ndack for their help with the data annotation.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Arnaud Huaulmé.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 135 KB)

About this article


Cite this article

Huaulmé, A., Despinoy, F., Perez, S.A.H. et al. Automatic annotation of surgical activities using virtual reality environments. Int J CARS 14, 1663–1671 (2019). https://doi.org/10.1007/s11548-019-02008-x
