Meaningful Assessment of Robotic Surgical Style using the Wisdom of Crowds

Original Article

Abstract

Objective

Quantitative assessment of surgical skill is an important aspect of surgical training; however, proposed metrics are sometimes difficult to interpret and may not capture the stylistic characteristics that define expertise. This study proposes a methodology for evaluating surgical skill based on metrics associated with stylistic adjectives, and evaluates the ability of this method to differentiate expertise levels.

Methods

We recruited subjects at different expertise levels to perform training tasks on a surgical simulator. A lexicon of contrasting adjective pairs, based on skills important for robotic surgery and inspired by the Global Evaluative Assessment of Robotic Skills (GEARS) tool, was developed. To validate the use of stylistic adjectives for surgical skill assessment, crowd-workers rated videos of the subjects' posture while performing the task, as well as videos of the task itself. Metrics associated with each adjective were identified from kinematic and physiological measurements through correlation with the crowd-sourced adjective ratings. To evaluate the ability of the chosen metrics to distinguish expertise levels, two classifiers were trained and tested using these metrics.
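To illustrate the metric-selection step described above, the sketch below correlates candidate kinematic or physiological metrics with crowd-sourced adjective ratings and keeps only the significantly correlated ones. This is a minimal sketch under stated assumptions, not the authors' implementation: the variable names, the synthetic data, and the choice of Spearman correlation are illustrative.

```python
# Minimal sketch (not the study's actual code) of selecting metrics that
# correlate with crowd-sourced adjective ratings. All names and data here
# are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def select_metrics(metric_values, adjective_ratings, alpha=0.05):
    """Return metrics whose Spearman correlation with the crowd ratings is
    statistically significant, mapped to their correlation coefficient."""
    selected = {}
    for name, values in metric_values.items():
        rho, p = spearmanr(values, adjective_ratings)
        if p < alpha:  # keep only significantly correlated metrics
            selected[name] = rho
    return selected

# Synthetic example: 20 trials, two candidate metrics.
rng = np.random.default_rng(0)
ratings = rng.uniform(1, 5, 20)  # crowd rating for one adjective pair, e.g., "fluid vs. jerky"
metrics = {
    "path_length": -0.8 * ratings + rng.normal(0, 0.3, 20),  # related to ratings
    "grip_emg_rms": rng.normal(0, 1, 20),                    # unrelated noise
}
print(select_metrics(metrics, ratings))
```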

Results

Crowd-sourced ratings for all adjectives were significantly correlated with expertise level. The results indicate that the naive Bayes classifier performs best, with accuracies of \(89\pm 12\%\), \(94\pm 8\%\), \(95\pm 7\%\), and \(100\pm 0\%\) when classifying subjects into four, three (under two different class groupings), and two levels of expertise, respectively.
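For context, accuracies reported as mean ± standard deviation (e.g., \(89\pm 12\%\)) can be obtained from repeated cross-validation. The sketch below trains a Gaussian naive Bayes classifier on synthetic stand-in features; the data, fold counts, and feature dimensions are assumptions, not the study's protocol.

```python
# Illustrative sketch, not the study's evaluation pipeline: repeated
# stratified cross-validation of a Gaussian naive Bayes classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 40 trials, 6 adjective-associated metric features,
# 4 expertise levels.
X, y = make_classification(n_samples=40, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Repeated cross-validation yields a distribution of fold accuracies,
# summarized as mean +/- standard deviation.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="accuracy")
print(f"accuracy: {100 * scores.mean():.0f} +/- {100 * scores.std():.0f}%")
```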

Conclusion

The proposed method is effective at mapping understandable adjectives of expertise onto the stylistic movements and physiological responses of trainees.

Keywords

Surgical skill assessment · Motion analysis · Crowd-sourcing · Robotic surgery

Notes

Acknowledgements

This work was supported by the da Vinci® Standalone Simulator loan program at Intuitive Surgical (PI: Rege) and a clinical research grant from Intuitive Surgical, Inc. (PI: Majewicz Fey).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The study protocol was approved by both UTD and UTSW IRB offices (UTD #14-57, UTSW #STU 032015-053).

Informed consent

Informed consent was obtained from all individual participants included in the study.

Copyright information

© CARS 2018

Authors and Affiliations

  1. Department of Electrical Engineering, University of Texas at Dallas, Richardson, USA
  2. Department of Surgery, UT Southwestern Medical Center, Dallas, USA
  3. Department of Mechanical Engineering, University of Texas at Dallas, Richardson, USA
