
A computer vision technique for automated assessment of surgical performance using surgeons’ console-feed videos

  • Amir Baghdadi
  • Ahmed A. Hussein
  • Youssef Ahmed
  • Lora A. Cavuoto
  • Khurshid A. Guru (corresponding author)
Original Article

Abstract

Purpose

To develop and validate an automated assessment of surgical performance (AASP) system for objective, computerized assessment of pelvic lymph node dissection (PLND), an integral part of robot-assisted radical cystectomy (RARC), using console-feed videos recorded during live surgery.

Methods

Video recordings of 20 PLNDs were included. The quality of lymph node clearance was assessed using features derived from the computer vision process, which included the number and cleared area of the vessels/nerves (NVs), the median color map of the image, and the mean entropy (a measure of the level of disorganization) of the video frame. The automated scores were compared with the validated pelvic lymphadenectomy appropriateness and completion evaluation (PLACE) scores rated by a panel of expert surgeons. Logistic regression analysis was used to compare the automated scores with the PLACE scores.
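As a rough illustration of the per-frame feature extraction described above, the sketch below computes a median color descriptor and a mean local-entropy value for a single video frame in Python. It is only a minimal sketch under stated assumptions: the abstract does not specify how the vessels/nerves are segmented or which libraries were used, so the OpenCV/scikit-image calls and parameters are illustrative, not the authors’ implementation.

# Minimal, hypothetical sketch of per-frame feature extraction; the NV step
# (counting and measuring cleared vessels/nerves) is omitted because the
# abstract does not describe how it is computed.
import cv2
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def frame_features(frame_bgr):
    # Median color per BGR channel as a coarse "median color map" summary (assumed form).
    median_color = np.median(frame_bgr.reshape(-1, 3), axis=0)
    # Mean local entropy of the grayscale frame, standing in for the
    # "level of disorganization" measure mentioned in the Methods.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_entropy = entropy(gray, disk(5)).mean()
    return np.concatenate([median_color, [mean_entropy]])

# Example call on a hypothetical frame file:
# features = frame_features(cv2.imread("frame_0001.png"))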

Results

Fourteen procedures were used to develop the AASP algorithm. A logistic regression model was trained and validated on the aforementioned features using 30% holdout cross-validation. The model was then tested on the remaining six procedures, where its accuracy in predicting the expert-based PLACE scores was 83.3%.
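The classification step can be pictured with the short Python sketch below: a logistic regression model fitted to per-procedure features with a 30% holdout split, then evaluated on held-out test procedures. The synthetic arrays stand in for the real video-derived features and expert PLACE labels, which are not available from the abstract, so the snippet is illustrative only.

# Hypothetical sketch of the train/validate/test workflow, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_dev = rng.normal(size=(14, 4))      # stand-in features for the 14 development procedures
y_dev = rng.integers(0, 2, size=14)   # stand-in for binarized expert PLACE labels
X_test = rng.normal(size=(6, 4))      # stand-in features for the 6 test procedures
y_test = rng.integers(0, 2, size=6)

# 30% holdout validation within the development set, as in the Results.
X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Holdout accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))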

Conclusions

To our knowledge, this is the first automated surgical skill assessment tool that provides an objective evaluation of surgical performance with high accuracy relative to expert surgeons’ assessment, and it can be extended to any endoscopic or robotic video-enabled surgical procedure.

Keywords

Computer vision · Automated skill evaluation · Radical cystectomy · Lymph node dissection · Quality · Lymphadenectomy

Notes

Acknowledgements

Roswell Park Cancer Institute Alliance Foundation.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

The study included only de-identified videos and was considered non-human subjects research by the Institutional Review Board at Roswell Park Comprehensive Cancer Center.


Copyright information

© CARS 2018

Authors and Affiliations

  • Amir Baghdadi (1, 2)
  • Ahmed A. Hussein (1)
  • Youssef Ahmed (1)
  • Lora A. Cavuoto (2)
  • Khurshid A. Guru (1), corresponding author
  1. A.T.L.A.S (Applied Technology Laboratory for Advanced Surgery) Program, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, USA
  2. Department of Industrial and Systems Engineering, University at Buffalo, Buffalo, USA
