
Crowdsourced versus expert evaluations of the vesico-urethral anastomosis in the robotic radical prostatectomy: is one superior at discriminating differences in automated performance metrics?

  • Original Article
Journal of Robotic Surgery

Abstract

Crowdsourcing from the general population is an efficient, inexpensive method of surgical performance evaluation. In this study, we compared the ability of expert surgeons and crowdsourced evaluators (the Crowd) to detect differences in robotic automated performance metrics (APMs). APMs (instrument motion tracking and event data captured directly from the robot system) for anterior vesico-urethral anastomoses (VUAs) of robotic radical prostatectomies were recorded by the dVLogger (Intuitive Surgical). Crowdsourced evaluators and four expert surgeons scored video footage using the Global Evaluative Assessment of Robotic Skills (GEARS), both individual domains and total score. Cases were then stratified into high- and low-quality performance groups for each evaluator based on GEARS scores, and APMs from each group were compared using the Mann–Whitney U test. Twenty-five VUAs performed by 11 surgeons were evaluated. The Crowd showed moderate correlation with averaged expert scores across all GEARS domains (r > 0.58, p < 0.01). Bland–Altman analysis showed that the Crowd assigned a narrower range of total GEARS scores than the experts. Comparing APMs between performance groups for each evaluator, the metric most commonly differentiated by GEARS scoring was the velocity of the dominant instrument arm. Using total GEARS scores, the Crowd discriminated differences in three APMs, outperforming two of the four expert evaluators. Although the Crowd assigns a narrower range of GEARS scores than experts, it maintains overall agreement with them, and its ability to discern differences in robotic movements (via APMs) through GEARS scoring rivals that of expert evaluators.
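The stratify-then-compare step of the analysis can be sketched as follows. All case data below are invented for illustration (the study's APM values are not reproduced here), and this minimal pure-Python Mann–Whitney U returns only the U statistic, whereas the study also reports p values:

```python
# Illustrative sketch: stratify cases into high/low GEARS groups,
# then compare an APM between groups with the Mann-Whitney U statistic.
# All numbers are hypothetical; only the rank-based U logic is standard.

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via average ranks (ties share a rank)."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[v] for v in a)                 # rank sum of group a
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)         # conventional smaller U

# Hypothetical cases: (total GEARS score, dominant-arm velocity in cm/s)
cases = [(23, 4.8), (19, 3.1), (25, 5.2), (15, 2.7), (21, 4.0), (17, 3.4)]
scores = sorted(g for g, _ in cases)
median_gears = scores[len(scores) // 2]

high = [v for g, v in cases if g >= median_gears]  # "high quality" group
low = [v for g, v in cases if g < median_gears]    # "low quality" group
u = mann_whitney_u(high, low)
print(u)  # small U here: the two groups' velocities barely overlap
```

A complete analysis would convert U to a p value (e.g. via a normal approximation or an exact table) before declaring an APM "differentiated" by an evaluator's scoring.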



Acknowledgements

The authors thank Anthony Jarc and Liheng Guo at Intuitive Surgical for providing and assisting with the dVLoggers.

Author information

Authors and Affiliations

Authors

Contributions

PJO: data management, data analysis, and manuscript writing. JC: project development, data collection, and manuscript editing. DH: data collection. HD: data collection and manuscript editing. AJH: project development, data collection, and manuscript editing.

Corresponding author

Correspondence to Andrew J. Hung.

Ethics declarations

Conflict of interest

Paul J. Oh B.S. declares no conflict of interest. Jian Chen M.D. declares no conflict of interest. David Hatcher M.D. declares no conflict of interest. Hooman Djaladat M.D. declares no conflict of interest. Andrew J. Hung is a consultant for Ethicon, Inc., and receives clinical research funding from Intuitive Surgical.

Ethical standards

Ethical research standards were met, and informed consent was obtained from all individual participants included in the study.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (XLSX 24 KB)


About this article


Cite this article

Oh, P.J., Chen, J., Hatcher, D. et al. Crowdsourced versus expert evaluations of the vesico-urethral anastomosis in the robotic radical prostatectomy: is one superior at discriminating differences in automated performance metrics?. J Robotic Surg 12, 705–711 (2018). https://doi.org/10.1007/s11701-018-0814-5

