
Performance Assessment

  • Timothy M. Kowalewski
  • Thomas S. Lendvay
Chapter
Part of the Comprehensive Healthcare Simulation book series (CHS)

Abstract

The skill of the surgeon directly impacts patient outcomes (Birkmeyer et al. N Engl J Med 369:1434–1442, 2013). This is intuitive, especially for a highly technical profession, yet our education and maintenance of certification (MOC) processes rarely incorporate standardized, objective performance assessment. In the education realm, surgical trainee advancement relies heavily on the consolidation of subjective faculty feedback. The Accreditation Council for Graduate Medical Education (ACGME) and the Residency Review Committees (RRCs) for individual specialty-specific programs utilize the ACGME Milestones, including assessment of six core competencies that encompass technical, cognitive, and communication skills, yet these remain relatively subjective measures of performance (Nasca et al. N Engl J Med 366:1051–1056, 2012). MOC algorithms for practicing surgeons still rely heavily on cognitive multiple-choice tests and self-paced continuing medical education (CME) activities and have not yet embraced procedural skills assessment because of concerns over logistics and standardization.

In this chapter we will discuss the objective methods available for performance assessment of learning and practicing surgeons. We will first focus on the need for objective skills assessment in our profession, followed by definitions and a decomposition of surgical skill. We will then discuss how objective skills metrics are being used today and where we should apply objective assessment methods going forward. We will close with the methods for accomplishing objective assessments. This chapter should provide any educator, instructor, graduate medical education administrator, or professional board member with knowledge of the opportunities for objective performance assessment of surgical clinicians.

Keywords

Surgery · Performance assessment · Objective skills · OSATS · CSATS · Crowdsourcing · Metrics · Tracking

References

  1. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff. 2002;21(5):103.
  2. Liu A, Tendick F, Cleary K, Kaufmann C. A survey of surgical simulation: applications, technology, and education. Presence Teleoperators Virtual Environ. 2003;12(6):599–614.
  3. Satava RM, Cuschieri A, Hamdorf J. Metrics for objective assessment. Surg Endosc. 2003;17(2):220–6.
  4. Darzi A, Smith S, Taffinder N. Assessing operative skill. BMJ. 1999;318(7188):887–8.
  5. Satava RM. The need for metrics in surgical education. Surg Endosc. 1999;13(11):1082.
  6. Gallagher AG, Satava RM. Virtual reality as a metric for the assessment of laparoscopic psychomotor skills. Surg Endosc. 2002;16(12):1746–52.
  7. Nasca T, Philibert I, Brigham T. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–6.
  8. Hamstra SJ, Yamazaki K, Holmboe ES. Milestones: annual report. Chicago: Accreditation Council for Graduate Medical Education (ACGME); 2016.
  9. DeMaria EJ, El Chaar M, Rogers AM, Eisenberg D, Kallies KJ, Kothari SN. American Society for Metabolic and Bariatric Surgery position statement on accreditation of bariatric surgery centers endorsed by the Society of American Gastrointestinal and Endoscopic Surgeons. Surg Obes Relat Dis. 2016;12(5):946–54.
  10. Tzafestas SG. Medical roboethics. In: Roboethics. Switzerland: Springer; 2016. p. 81–92.
  11. Long C, Tsay EL, Jacobo SA, Popat R, Singh K, Chang RT. Factors associated with patient Press Ganey satisfaction scores for ophthalmology patients. Ophthalmology. 2016;123(2):242–7.
  12. Buyske J. Forks in the road: the assessment of surgeons from the American Board of Surgery perspective. Surg Clin North Am. 2016;96(1):139–46.
  13. Bhatt NR, Morris M, O’Neil A, Gillis A, Ridgway PF. When should surgeons retire? Br J Surg. 2016;103(1):35–42.
  14. Deering SH, Rush RM, Lesperance RN, Roth BJ. Perceived effects of deployments on surgeon and physician skills in the US Army medical department. Am J Surg. 2011;201(5):666–72.
  15. Lendvay TS, et al. Virtual reality robotic surgery warm-up improves task performance in a dry laboratory environment: a prospective randomized controlled study. J Am Coll Surg. 2013;216(6):1181–92.
  16. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication: the relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997;277(7):553–9.
  17. Birkmeyer JD, et al. Surgical skill and complication rates after bariatric surgery. N Engl J Med. 2013;369(15):1434–42.
  18. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63.
  19. Peters J, et al. Development and validation of a comprehensive program of education and assessment of the basic fundamentals of laparoscopic surgery. Surgery. 2004;135(1):21–7.
  20. Derossis AM, et al. Development of a model for training and evaluation of laparoscopic skills. Am J Surg. 1998;175(6):482–7.
  21. Derossis AM, Bothwell J, Sigman HH, Fried GM. The effect of practice on performance in a laparoscopic simulator. Surg Endosc. 1998;12(9):1117–20.
  22. Gallagher AG, et al. Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg. 2005;241(2):364.
  23. Schmidt RA, Lee TD. Motor control and learning: a behavioral emphasis. Champaign: Human Kinetics Publishers; 2005.
  24. Robles-De-La-Torre G. The importance of the sense of touch in virtual and real environments. IEEE Multimedia. 2006;13(3):24–30.
  25. Cole J. Pride and a daily marathon. Cambridge, MA: MIT Press; 1995.
  26. Craig JC, Rollman GB. Somesthesis. Annu Rev Psychol. 1999;50(1):305–31.
  27. Cole J, Paillard J. Living without touch and peripheral information about body position and movement: studies with deafferented subjects. In: The body and the self. Cambridge, MA: The MIT Press. p. 245–66.
  28. Paillard J. Body schema and body image: a double dissociation in deafferented patients. Mot Control Today Tomorrow. 1999;48(3):197–214.
  29. Nise NS. Control systems engineering. Hoboken: Wiley; 2007.
  30. Eydelman MB, Nguyen T, Green JA. The US Food and Drug Administration’s new regulatory toolkit to bring medical device innovation back to the United States. JAMA Ophthalmol. 2016;134(4):353–4.
  31. Martin JA, et al. Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg. 1997;84(2):273–8.
  32. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative ‘bench station’ examination. Am J Surg. 1997;173(3):226–30.
  33. Vassiliou MC, et al. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg. 2005;190:107–13.
  34. Goh AC, Goldfarb DW, Sander JC, Miles BJ, Dunkin BJ. Global evaluative assessment of robotic skills: validation of a clinical assessment tool to measure robotic surgical skills. J Urol. 2012;187(1):247–52.
  35. Chen C, et al. Crowd-sourced assessment of technical skills: a novel method to evaluate surgical performance. J Surg Res. 2014;187(1):65–71.
  36. Holst D, et al. Crowd-sourced assessment of technical skills: differentiating animate surgical skill through the wisdom of crowds. J Endourol. 2015;29(10):1183–8.
  37. Aghdasi N, Bly R, White LW, Hannaford B, Moe K, Lendvay TS. Crowd-sourced assessment of surgical skills in cricothyrotomy procedure. J Surg Res. 2015;196(2):302–6.
  38. Holst D, et al. Crowd-sourced assessment of technical skills: an adjunct to urology resident surgical simulation training. J Endourol. 2015;29(5):604–9.
  39. Kirsch S, Comstock B, Warren J, Schaffhausen C, Kowalewski T, Lendvay T. Crowd-sourced assessment of technical skills (CSATS): a scalable assessment tool for the nursing workforce. J Invest Med. 2015;63(1):92.
  40. Lendvay TS, White L, Kowalewski T. Crowdsourcing to assess surgical skill. JAMA Surg. 2015;150(11):1086–7.
  41. Deal SB, et al. Crowd-sourced assessment of technical skills: an opportunity for improvement in the assessment of laparoscopic surgical skills. Am J Surg. 2016;211(2):398–404.
  42. Chen SP, et al. Optical biopsy of bladder cancer using crowd-sourced assessment. JAMA Surg. 2016;151(1):90–3.
  43. Kowalewski TM, et al. Crowd-sourced assessment of technical skills for validation of basic laparoscopic urologic skills tasks. J Urol. 2016;195(6):1859–65.
  44. Ghani KR, et al. Measuring to improve: peer and crowd-sourced assessments of technical skill with robot-assisted radical prostatectomy. Eur Urol. 2016;69(4):547–50.
  45. Satava RM. Virtual reality surgical simulator. The first steps. Surg Endosc. 1993;7(3):203.
  46. Healy GB. The college should be instrumental in adapting simulators to education. Bull Am Coll Surg. 2002;87(11):10.
  47. Champion HR, Gallagher AG. Surgical simulation – a ‘good idea whose time has come’. Br J Surg. 2003;90(7):767–8.
  48. Gallagher AG, Richie K, McClure N, McGuigan J. Objective psychomotor skills assessment of experienced, junior, and novice laparoscopists with virtual reality. World J Surg. 2001;25(11):1478–83.
  49. Watterson JD, Beiko DT, Kuan JK, Denstedt JD. A randomized prospective blinded study validating acquisition of ureteroscopy skills using a computer-based virtual reality endourological simulator. J Urol. 2002;168(5):1928–32.
  50. Seymour NE, et al. Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg. 2002;236(4):458.
  51. Sutherland LM, et al. Surgical simulation: a systematic review. Ann Surg. 2006;243(3):291.
  52. Fraser SA, Feldman LS, Stanbridge D, Fried GM. Characterizing the learning curve for a basic laparoscopic drill. Surg Endosc. 2005;19(12):1572–8.
  53. Scott DJ, Ritter EM, Tesfay ST, Pimentel EA, Nagji A, Fried GM. Certification pass rate of 100% for fundamentals of laparoscopic surgery skills after proficiency-based training. Surg Endosc. 2008;22(8):1887–93.
  54. Fraser SA, Klassen DR, Feldman LS, Ghitulescu GA, Stanbridge D, Fried GM. Evaluating laparoscopic skills: setting the pass/fail score for the MISTELS system. Surg Endosc. 2003;17(6):964–7.
  55. Keyser EJ, Derossis AM, Antoniuk M, Sigman HH, Fried GM. A simplified simulator for the training and evaluation of laparoscopic skills. Surg Endosc. 2000;14(2):149–53.
  56. Derossis AM, Antoniuk M, Fried GM. Evaluation of laparoscopic skills: a 2-year follow-up during residency training. Can J Surg. 1999;42(4):293.
  57. Feldman LS, Hagarty SE, Ghitulescu G, Stanbridge D, Fried GM. Relationship between objective assessment of technical skills and subjective in-training evaluations in surgical residents. J Am Coll Surg. 2004;198(1):105–10.
  58. Feldman LS, Sherman V, Fried GM. Using simulators to assess laparoscopic competence: ready for widespread use? Surgery. 2004;135(1):28.
  59. Fried GM, Derossis AM, Bothwell J, Sigman HH. Comparison of laparoscopic performance in vivo with performance measured in a laparoscopic simulator. Surg Endosc. 1999;13(11):1077–81.
  60. Stefanidis D, Sierra R, Korndorffer JR, et al. Intensive continuing medical education course training on simulators results in proficiency for laparoscopic suturing. Am J Surg. 2006;191(1):23–7.
  61. Feldman LS, Cao J, Andalib A, Fraser S, Fried GM. A method to characterize the learning curve for performance of a fundamental laparoscopic simulator task: defining. Surgery. 2009;146(2):381–6.
  62. Dauster B, et al. Validity of the MISTELS simulator for laparoscopy training in urology. J Endourol. 2005;19(5):541–5.
  63. Swanstrom LL, Fried GM, Hoffman KI, Soper NJ. Beta test results of a new system assessing competence in laparoscopic surgery. J Am Coll Surg. 2006;202(1):62–9.
  64. Stefanidis D, Korndorffer JR, et al. Proficiency maintenance: impact of ongoing simulator training on laparoscopic skill retention. J Am Coll Surg. 2006;202(4):599–603.
  65. Fried GM, et al. Proving the value of simulation in laparoscopic surgery. Ann Surg. 2004;240(3):518.
  66. Korndorffer JR, et al. Simulator training for laparoscopic suturing using performance goals translates to the operating room. J Am Coll Surg. 2005;201(1):23–9.
  67. Ritter EM, Scott DJ. Design of a proficiency-based skills training curriculum for the fundamentals of laparoscopic surgery. Surg Innov. 2007;14(2):107.
  68. Castellvi AO, Hollett LA, Minhajuddin A, Hogg DC, Tesfay ST, Scott DJ. Maintaining proficiency after fundamentals of laparoscopic surgery training: a 1-year analysis of skill retention for surgery residents. Surgery. 2009;146(2):387–93.
  69. Sethi AS, Peine WJ, Mohammadi Y, Sundaram CP. Validation of a novel virtual reality robotic simulator. J Endourol. 2009;23(3):503–8.
  70. Kenney PA, Wszolek MF, Gould JJ, Libertino JA, Moinzadeh A. Face, content, and construct validity of dV-trainer, a novel virtual reality simulator for robotic surgery. Urology. 2009;73(6):1288–92.
  71. Lendvay TS, Casale P, Sweet R, Peters C. Initial validation of a virtual-reality robotic simulator. J Robot Surg. 2008;2(3):145–9.
  72. Rosen J, MacFarlane M, Richards C, Hannaford B, Sinanan M. Surgeon-tool force/torque signatures – evaluation of surgical skills in minimally invasive surgery. Stud Health Technol Inform. 1999;62:290–6.
  73. Reiley CE, Lin HC, Yuh DD, Hager GD. A review of methods for objective surgical skill evaluation. Surg Endosc. 2011;25(2):356–66.
  74. Chmarra MK, Grimbergen CA, Dankelman J. Systems for tracking minimally invasive surgical instruments. Minim Invasive Ther Allied Technol. 2007;16(6):328–40.
  75. Zia A, Sharma Y, Bettadapura V, Sarin EL, Clements MA, Essa I. Automated assessment of surgical skills using frequency analysis. In: International conference on medical image computing and computer-assisted intervention, Munich, Germany, 5–9 October 2015. p. 430–8.
  76. Ahmidi N, Tao L, Sefati S, Gao Y, Lea C, Haro BB, Zappella L, Khudanpur S, Vidal R, Hager GD. A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans Biomed Eng. 2017;64(9):2025–41.
  77. Rabiner LR. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE. 1989;77(2):257–86.
  78. Inamura T, Tanie H, Nakamura Y. From stochastic motion generation and recognition to geometric symbol development and manipulation. In: International conference on humanoid robots, Karlsruhe/Munich, Germany; 2003.
  79. Itabashi K, Hirana K, Suzuki T, Okuma S, Fujiwara F. Modelling and realization of the peg-in-hole task based on hidden Markov model. In: Proceedings of the 1998 IEEE international conference on robotics and automation, vol. 2, Trieste, Italy; 1998. p. 1142–7.
  80. Yang J, Xu Y, Chen CS. Human action learning via hidden Markov model. IEEE Trans Syst Man Cybern A Syst Humans. 1997;27(1):34–44.
  81. Hannaford B, Lee P. Hidden Markov model of force torque information in telemanipulation. Int J Robot Res. 1991;10(5):528–39.
  82. Inamura T, Toshima I, Tanie H, Nakamura Y. Embodied symbol emergence based on mimesis theory. Int J Robot Res. 2004;23(4–5):363–77.
  83. Kowalewski TM, Rosen J, Chang L, Sinanan M, Hannaford B. Optimization of a vector quantization codebook for objective evaluation of surgical skill. In: Proceedings of medicine meets virtual reality 12, Newport, CA; 2004. p. 174–9.
  84. Rosen J, Chang L, Brown JD, Hannaford B, Sinanan M, Satava R. Minimally invasive surgery task decomposition – etymology of endoscopic suturing. Stud Health Technol Inform. 2003;94:295–301.
  85. Rosen J, Hannaford B, Richards CG, Sinanan MN. Markov modeling of minimally invasive surgery based on tool/tissue interaction and force/torque signatures for evaluating surgical skills. IEEE Trans Biomed Eng. 2001;48(5):579–91.
  86. Rosen J, Solazzo M, Hannaford B, Sinanan M. Task decomposition of laparoscopic surgery for objective evaluation of surgical residents’ learning curve using hidden Markov model. Comput Aided Surg. 2002;7(1):49–61.
  87. Rosen J, Brown JD, Chang L, Barreca M, Sinanan M, Hannaford B. The BlueDRAGON – a system for measuring the kinematics and dynamics of minimally invasive surgical tools in vivo. In: Proceedings of the 2002 IEEE international conference on robotics and automation, vol. 2, Washington, DC; 2002. p. 1876–81.
  88. Lum M. Kinematic optimization of a 2-DOF spherical mechanism for a minimally invasive surgical robot. Master’s thesis. University of Washington, Department of Electrical Engineering; 2004. Available at: http://astro.ee.washington.edu/BRL_Pubs/Pdfs/Th029.pdf.
  89. Rosen J, Lum M, Trimble D, Hannaford B, Sinanan M. Spherical mechanism analysis of a surgical robot for minimally invasive surgery – analytical and experimental approaches. Stud Health Technol Inform. 2005;111:422–8.
  90. Gunther S, Rosen J, Hannaford B, Sinanan M. The Red DRAGON: a multi-modality system for simulation and training in minimally invasive surgery. Stud Health Technol Inform. 2007;125:149.
  91. Rosen J, Brown JD, Chang L, Sinanan MN, Hannaford B. Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model. IEEE Trans Biomed Eng. 2006;53(3):399–413.
  92. Kragic D, Marayong P, Li M, Okamura AM, Hager GD. Human-machine collaborative systems for microsurgical applications. Int J Robot Res. 2005;24(9):731–41.
  93. Li M, Okamura AM. Recognition of operator motions for real-time assistance using virtual fixtures. In: Proceedings of the 11th symposium on haptic interfaces for virtual environment and teleoperator systems (HAPTICS 2003); 2003. p. 125–31.
  94. Lin HC, Shafran I, Yuh D, Hager GD. Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions. Comput Aided Surg. 2006;11(5):220–30.
  95. Megali G, Sinigaglia S, Tonet O, Dario P. Modelling and evaluation of surgical performance using hidden Markov models. IEEE Trans Biomed Eng. 2006;53(10):1911–9.
  96. Dosis A, Bello F, Gillies D, Undre S, Aggarwal R, Darzi A. Laparoscopic task recognition using hidden Markov models. Stud Health Technol Inform. 2005;111:115–22.
  97. Reiley CE, et al. Automatic recognition of surgical motions using statistical modeling for capturing variability. Stud Health Technol Inform. 2008;132:396.
  98. Judkins T, Oleynikov D, Stergiou N. Objective evaluation of expert performance during human robotic surgical procedures. J Robot Surg. 2008;1(4):307–12.
  99. Oleynikov D, Judkins TN, Stergiou N. Objective evaluation of expert and novice performance during robotic surgical training tasks. Surg Endosc. 2009;23(3):590–7.
  100. Narazaki K, Oleynikov D, Stergiou N. Robotic surgery training and performance: identifying objective variables for quantifying the extent of proficiency. Surg Endosc. 2006;20(1):96–103.
  101. Kowalewski TM, et al. Beyond task time: automated measurement augments fundamentals of laparoscopic skills methodology. J Surg Res. 2014;192(2):329–38.
  102. Law B, Atkins MS, Kirkpatrick AE, Lomax AJ. Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment. In: Proceedings of the eye tracking research & applications symposium (ETRA 2004); 2004. p. 41–8.
  103. MacKenzie CL, Graham ED, Cao CG, Lomax AJ. Virtual hand laboratory meets endoscopic surgery. Stud Health Technol Inform. 1999;62:212–8.
  104. Cuschieri A. Visual displays and visual perception in minimal access surgery. Semin Laparosc Surg. 1995;2(3):209–14.
  105. Ibbotson JA, MacKenzie CL, Cao CG, Lomax AJ. Gaze patterns in laparoscopic surgery. Stud Health Technol Inform. 1999;62:154–60.
  106. Ahmidi N, Hager G, Ishii L, Fichtinger G, Gallia G, Ishii M. Surgical task and skill classification from eye tracking and tool motion in minimally invasive surgery. Med Image Comput Comput Assist Interv. 2010:295–302.
  107. Yule S, Flin R, Maran N, Rowley D, Youngson G, Paterson-Brown S. Surgeons’ non-technical skills in the operating room: reliability testing of the NOTSS behavior rating system. World J Surg. 2008;32(4):548–56.
  108. Marshall SD, Mehra R. The effects of a displayed cognitive aid on non-technical skills in a simulated ‘can’t intubate, can’t oxygenate’ crisis. Anaesthesia. 2014;69(7):669–77.
  109. Bharathan R, Aggarwal R, Darzi A. Operating room of the future. Best Pract Res Clin Obstet Gynaecol. 2013;27(3):311–22.
  110. Tao J, Tan T. Affective computing: a review. In: International conference on affective computing and intelligent interaction; 2005. p. 981–95.
  111. Picard RW. Affective computing. Cambridge, MA: MIT Press; 1997.
  112. Borish M, Cordar A, Foster A, Kim T, Murphy J, Chaudhary N, Lok B. Utilizing real-time human-assisted virtual humans to increase real-world interaction empathy. In: KEER 2014 conference, Linköping, Sweden, 10–13 June 2014.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mechanical Engineering, University of Minnesota, Minneapolis, USA
  2. Department of Urology, University of Washington, Seattle Children’s Hospital, Seattle, USA
