
Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures


Abstract

Purpose

Surgical process models (SPMs) have recently been created for situation-aware computer-assisted systems in the operating room. One important challenge in this area is the automatic acquisition of SPMs. The purpose of this study is to present a new method for the automatic detection of low-level surgical tasks, that is, the sequence of activities in a surgical procedure, from microscope video images only. The level of granularity that we addressed in this work is symbolized by activities formalized as triplets <action, surgical tool, anatomical structure>.
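The triplet formalization above can be sketched as a simple data structure; this is a minimal illustration only, and the field values shown (e.g. "incise", "knife", "cornea") are hypothetical examples, not taken from the paper's vocabulary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    """One low-level surgical activity, formalized as a triplet
    <action, surgical tool, anatomical structure>."""
    action: str     # e.g. "incise" (hypothetical value)
    tool: str       # e.g. "knife" (hypothetical value)
    structure: str  # e.g. "cornea" (hypothetical value)

# A hypothetical activity from a cataract procedure:
incision = Activity(action="incise", tool="knife", structure="cornea")
print(incision)
```

A frozen dataclass makes the triplet hashable, so a recognized sequence of activities can be compared or counted directly.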

Methods

Using the results of our latest work on the recognition of surgical phases in cataract surgeries, and based on the hypothesis that most activities occur in only one or two phases, we created a lightweight ontology, formalized as a hierarchical decomposition into phases and activities. Information concerning the surgical tools, the areas where tools are used and three other visual cues was detected through an image-based approach and combined with information on the current surgical phase within a knowledge-based recognition system. Knowing the surgical phase before activity recognition allows the supervised classification to be adapted to that phase. Multiclass Support Vector Machines were chosen as the classification algorithm.
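The phase-adapted classification idea can be sketched as one multiclass SVM per surgical phase, selected at recognition time by the known current phase. This is a minimal sketch using scikit-learn with synthetic feature vectors standing in for the image-based cues; the phase names, class counts and feature dimensionality below are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame cue vectors (tool presence, tool-usage
# area and other visual cues would fill these dimensions in practice).
def make_frames(n, n_classes, dim=8):
    y = rng.integers(0, n_classes, n)
    X = rng.normal(size=(n, dim)) + y[:, None]  # class-dependent shift
    return X, y

# One multiclass SVM per surgical phase: each classifier only has to
# separate the few activities that can occur in that phase.
phase_activities = {"incision": 3, "phacoemulsification": 4}  # hypothetical
phase_classifiers = {}
for phase, n_act in phase_activities.items():
    X, y = make_frames(200, n_act)
    phase_classifiers[phase] = SVC(kernel="rbf").fit(X, y)

# At recognition time, the known current phase selects the classifier.
X_test, _ = make_frames(5, phase_activities["incision"])
pred = phase_classifiers["incision"].predict(X_test)
print(pred)
```

Restricting each classifier to the activities plausible in its phase shrinks the label space per decision, which is the benefit the knowledge-based (top-down) component provides over a single flat classifier.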

Results

Using a dataset of 20 cataract surgeries, and identifying 25 possible pairs of activities, a frame-by-frame recognition rate of 64.5 % was achieved with the proposed system.

Conclusions

The addition of human knowledge to traditional bottom-up approaches based on image analysis appears to be promising for low-level task detection. The results of this work could be used for the automatic indexation of post-operative videos.




Author information

Correspondence to Florent Lalys.

Electronic Supplementary Material

Below is the Electronic Supplementary Material.

ESM 1 (JPG 146 kb)

ESM (MPG 27,581 kb)




Cite this article

Lalys, F., Bouget, D., Riffaud, L. et al. Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures. Int J CARS 8, 39–49 (2013). https://doi.org/10.1007/s11548-012-0685-6


Keywords

  • Surgical workflow
  • Activity detection
  • Surgical ontology
  • Image-based analysis
  • Surgical process model