Automatic and near real-time stylistic behavior assessment in robotic surgery
Automatic skill evaluation is of great importance in surgical robotic training. Extensive research has been done to evaluate surgical skill, and a variety of quantitative metrics have been proposed. However, these methods primarily rely on expert-selected features, which may not capture latent information in movement data. In addition, these features are calculated over the entire task and are reported to the user only after the task is complete. Thus, such quantitative metrics cannot tell users how to modify their movements to improve performance in real time. This study focuses on automatic stylistic behavior recognition that has the potential to be implemented in near real time.
We propose a sparse coding framework for automatic stylistic behavior recognition in short time intervals using only position data from the hand, wrist, elbow, and shoulder. A codebook is built for each stylistic adjective using the positive and negative labels provided for each trial through crowdsourcing. Sparse code coefficients are then obtained for short time intervals (0.25 s) within a trial using this codebook. A support vector machine classifier is trained on the sparse codes from the training set and validated through tenfold cross-validation.
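The pipeline above can be sketched with scikit-learn's dictionary learning and SVM tools. This is a minimal illustration, not the authors' implementation: the window dimensions, number of dictionary atoms, sparsity penalty, and SVM kernel are all assumptions, and the data here is synthetic stand-in for the crowd-labeled joint-position windows.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's data: each row is one 0.25 s window of
# concatenated joint positions (hand, wrist, elbow, shoulder); each label is a
# crowd-sourced positive/negative vote for one stylistic adjective.
# (All dimensions here are illustrative, not taken from the paper.)
X = rng.standard_normal((200, 30))   # 200 windows x 30 position samples
y = rng.integers(0, 2, size=200)     # binary style label per window

# Learn a codebook (dictionary) from the windows, then represent each window
# by its sparse code coefficients over that codebook.
dico = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(X)        # sparse codes, shape (200, 20)

# Classify the sparse codes with an SVM, validated by tenfold cross-validation.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, codes, y, cv=10)
print(codes.shape, scores.shape)
```

Because the codebook is learned offline, only the sparse-coding step and a single SVM prediction are needed per 0.25 s window at run time, which is what makes near-real-time assessment plausible.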
The results indicate that the proposed dictionary learning method can assess stylistic behavior in near real time from user joint position data, with improved accuracy compared to using PCA features or raw data.
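For comparison, the two baselines mentioned above can be sketched as follows. Again, the data, component count, and kernel are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))   # synthetic joint-position windows
y = rng.integers(0, 2, size=200)     # synthetic binary style labels

# PCA baseline: project each window onto its leading principal components
# and classify the projections.
X_pca = PCA(n_components=10, random_state=0).fit_transform(X)
pca_scores = cross_val_score(SVC(kernel="rbf"), X_pca, y, cv=10)

# Raw-data baseline: classify the unprojected windows directly.
raw_scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
print(pca_scores.shape, raw_scores.shape)
```

The paper's claim is that sparse codes outperform both of these feature representations; on this random stand-in data, of course, all three hover near chance.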
The ability to automatically evaluate a trainee's style of movement over short time intervals could provide the user with online, customized feedback and thus improve performance during surgical tasks.
Keywords: Surgical skill assessment · Crowdsourcing · Robotic surgery
This work was supported by the da Vinci® Standalone Simulator loan program at Intuitive Surgical (PI: Rege) and a clinical research grant from Intuitive Surgical, Inc. (PI: Majewicz Fey).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants included in the study.