Video and accelerometer-based motion analysis for automated surgical skills assessment

  • Aneeq Zia
  • Yachna Sharma
  • Vinay Bettadapura
  • Eric L. Sarin
  • Irfan Essa
Original Article

Abstract

Purpose

Basic surgical skills of suturing and knot tying are an essential part of medical training. An automated system for surgical skills assessment could save experts' time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data).

Methods

We conduct a large study of basic surgical skill assessment on a dataset that contains video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features—approximate entropy and cross-approximate entropy—which quantify the predictability and regularity of fluctuations in time series data. The proposed features are compared against existing Sequential Motion Texture, Discrete Cosine Transform, and Discrete Fourier Transform methods for surgical skills assessment.
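The abstract does not give the exact formulation used in the paper, but approximate entropy (ApEn) in the standard sense of Pincus (1991) can be sketched as follows. The function below is an illustrative NumPy implementation with conventional defaults (embedding dimension m = 2, tolerance r = 0.2 × the signal's standard deviation); the parameter choices are assumptions, not the paper's reported settings.

```python
import numpy as np

def approx_entropy(u, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D time series.

    Lower values indicate a more regular, predictable signal;
    higher values indicate more irregularity.
    """
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()  # common convention: tolerance as 20% of std

    def phi(dim):
        n = len(u) - dim + 1
        # Embed the series into overlapping windows of length `dim`.
        x = np.array([u[i:i + dim] for i in range(n)])
        # Chebyshev (max-norm) distance between every pair of windows.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # Fraction of windows within tolerance r (self-matches included).
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A deterministic signal such as a sine wave yields a noticeably lower ApEn than white noise of the same length, which is the property the paper exploits: smooth, controlled expert motion should be more regular than hesitant novice motion.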

Results

We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods on video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. On accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusing video and acceleration features can improve overall skill-assessment performance.
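The abstract does not specify how the two modalities are fused. One common baseline (an assumption here, not necessarily the paper's scheme) is early fusion: z-normalize each modality's feature vectors and concatenate them before classification, so that neither modality dominates purely by scale.

```python
import numpy as np

def fuse_features(video_feats, accel_feats):
    """Early fusion by concatenation (illustrative sketch only).

    video_feats: (n_trials, d_video) array of video-derived features.
    accel_feats: (n_trials, d_accel) array of accelerometer features.
    Returns an (n_trials, d_video + d_accel) fused feature matrix.
    """
    def znorm(x):
        x = np.asarray(x, dtype=float)
        std = x.std(axis=0)
        std[std == 0] = 1.0  # avoid division by zero on constant features
        return (x - x.mean(axis=0)) / std

    return np.hstack([znorm(video_feats), znorm(accel_feats)])
```

The fused matrix can then be fed to any standard classifier to predict the OSATS-like skill level per criterion.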

Conclusion

Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.

Keywords

Surgical skills assessment · Computer vision · Machine learning · Multi-modal data

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.


Copyright information

© CARS 2018

Authors and Affiliations

  • Aneeq Zia (1)
  • Yachna Sharma (1)
  • Vinay Bettadapura (1)
  • Eric L. Sarin (2)
  • Irfan Essa (1)

  1. College of Computing, Georgia Tech, Atlanta, Georgia
  2. Department of Surgery, Emory University, Atlanta, Georgia
