Automated surgical skill assessment in RMIS training

  • Aneeq Zia
  • Irfan Essa
Original Article

Abstract

Purpose

Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of expert surgeons’ time and is prone to subjectivity. In this paper, we explore the use of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we propose a method for generating ‘task highlights’, which give surgeons more directed feedback regarding which segments had the most effect on the final skill score.

Methods

We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features.
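As a rough illustration of the holistic-feature idea (not the authors’ exact pipeline), a DCT-based feature can be built by transforming each kinematic channel and keeping only the low-order coefficients, which summarize the global shape of the motion. The choice of `n_coeffs` below is a hypothetical setting for illustration:

```python
import numpy as np

def dct2(x):
    """DCT-II of a 1-D signal (unnormalized), computed directly from its definition."""
    T = len(x)
    n = np.arange(T)
    return np.array([(x * np.cos(np.pi * k * (2 * n + 1) / (2 * T))).sum()
                     for k in range(T)])

def holistic_dct_features(kinematics, n_coeffs=10):
    """Sketch of a holistic DCT feature from robot kinematic data.

    kinematics: (T, C) array, T timesteps by C kinematic channels.
    Returns the first n_coeffs DCT coefficients of each channel, concatenated.
    """
    feats = [dct2(kinematics[:, c])[:n_coeffs] for c in range(kinematics.shape[1])]
    return np.concatenate(feats)
```

A DFT variant would swap the cosine transform for `np.fft.rfft` and keep coefficient magnitudes; SMT and ApEn features require their own constructions.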

Results

Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Moreover, our proposed feature fusion strategy significantly improves skill score prediction performance, achieving an average Spearman correlation coefficient of up to 0.61. We also provide an analysis of how the proposed task highlights relate to different surgical gestures within a task.
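The Spearman coefficient used above measures rank agreement between predicted and ground-truth skill scores. For tie-free scores it reduces to the Pearson correlation of the ranks, as this minimal sketch shows (`scipy.stats.spearmanr` additionally handles tied ranks):

```python
import numpy as np

def spearman_rho(pred, truth):
    """Spearman correlation for tie-free score lists: Pearson correlation of ranks."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r

    rp, rt = ranks(np.asarray(pred)), ranks(np.asarray(truth))
    rp -= rp.mean()
    rt -= rt.mean()
    return float((rp * rt).sum() / np.sqrt((rp ** 2).sum() * (rt ** 2).sum()))
```

A value of 1.0 means predicted scores order the trials exactly as the ground truth does; 0.61 indicates substantial, but imperfect, rank agreement.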

Conclusions

Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. The presented framework could potentially enable real-time score feedback in RMIS training and help surgical trainees pursue more focused training.

Keywords

Robot-assisted surgery · Surgical skill assessment · Feature fusion

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Copyright information

© CARS 2018

Authors and Affiliations

  1. College of Computing, Georgia Institute of Technology, Atlanta, USA