I have been the Fellowship Training Program Director at a Society of Surgical Oncology-approved Fellowship Program for almost 20 years. Recently, I was asked by a few members of the Society of Surgical Oncology if this means that I am still very committed to education or if I just need to be committed. I expect that some of the former fellows who have trained during my reign will gleefully weigh in with opinions on this matter. The only important point (yes, I can come to the point) is that during nearly 20 years, those of us involved in the education and training of young surgeons have seen many changes.

In this issue of the Annals of Surgical Oncology, there is an article entitled “The Use of a Novel, Web-based Educational Platform Facilitates Intraoperative Training in a Surgical Oncology Fellowship Program.” All surgeons involved in training residents or fellows are expected to assess the technical competence of their trainees. This includes an assessment of their knowledge base (anatomy, disease processes), preparation (information about the specific patient, collateral reading), emotional responses (performance under pressure, interactions with the faculty and operating room staff, maturity and response to criticism), and technical skill (manual dexterity, hand-eye coordination, recognizing tissue planes, maintaining pace and focus during the procedure). The changes in surgical training that we have witnessed during the past two decades have strengthened the notion that surgical educators must provide timely, accurate, and reproducible feedback to trainees to optimize the educational experience. Given the mandated limits on the number of hours that trainees spend in the hospital, it must be recognized that the number of surgical procedures that trainees will perform and, concomitantly, the number of encounters in which we as surgical educators can assess our trainees and provide constructive feedback have been reduced. We are still charged with ensuring that our trainees are competent, well prepared, and safe when they complete their training period and are released to go forth and provide care to the populace. Therefore, methods and tools that provide useful information on the completeness and adequacy of training have become an increasingly important focus.

The surgical care of patients with vascular disease has undergone radical changes during the past two decades. In fact, those of us who trained in surgical residency more than 20 years ago would scarcely recognize the current practice of vascular surgical care. Because these changes have been so dramatic and rapid, new cognitive and virtual reality training systems have been described to improve the performance of trainees performing endovascular procedures.1 Similarly, the development and continuing evolution of laparoscopic equipment and skills make assessment of competencies in these procedures a critical part of any training program.2 Trainees in many programs spend allotted periods of time using computer-based virtual reality programs and hands-on simulators to practice their laparoscopic skills. Some have even suggested that use of these simulators should be included during the interview process to assess the baseline ability of trainees applying for minimally invasive fellowship slots.3 A significant issue must be considered: many of these virtual reality and simulator-based training programs have not been fully vetted and validated in systematic evaluations to demonstrate improvement in the performance of the trainee during actual operations on patients. For example, a recent review examined simulation-based training for surgical skills and the transferability of those skills to the operative setting in patients.4 This review found only ten randomized, controlled trials and one nonrandomized comparative study that assessed the utility of simulator training in improving surgical skills during patient operations. The authors of this study concluded that simulation-based training seems to transfer some improved skills to the operative setting, but the results are highly variable. The most important reality, evident to all of us involved in surgical training, is that there is no standardized, validated, and well-accepted methodology to assess improvement in the intraoperative skill set of our trainees.

The manuscript by Roach et al. in this issue of the Annals of Surgical Oncology is an interesting and laudable attempt to create such a methodology. Readers of this manuscript should be forewarned that the statistical considerations and descriptions are relatively complex (unless you spend significant portions of your free time contemplating statistical analyses). Be dauntless and forge ahead, because the manuscript is worth your time and consideration. The authors describe a web-based system that allows both the trainees and the surgical faculty to enter an evaluation immediately after an operation (a self-evaluation by the trainee, and an evaluation of the trainee’s preparation and performance by the faculty member). Approximately 200 operations performed by surgical oncologists are included in this program. (I wish the authors had provided a list of those operations; perhaps they can provide such a list on a website related to their training program.) The examples provided indicate that both the trainee and the faculty member can provide either a basic or an advanced assessment of the specific operation. Based on the data provided in the manuscript (the average time to complete the evaluation was only 39 seconds), it seems that the trainees and faculty members most commonly chose the more cursory performance evaluation. I would love to know whether the trainees feel that this brief evaluation is sufficient or whether they would have preferred a more detailed analysis (at a cost of more than 39 seconds to the faculty member completing the evaluation).

This manuscript represents an initial assessment of this web-based program. To test and further validate the utility of this program, I believe that it will be necessary to export such a program to other surgical oncology fellowship training centers. I would be very interested in applying this program to our own fellowship group to assess its usefulness. Clearly, it is critical that we provide our trainees with prompt, concise, and constructive evaluation of their performance. As the fellows accrue a greater number of surgical cases, trends will emerge that allow both the faculty and the trainee to assess whether improvement is occurring. This program has the potential to provide quick and useful feedback to trainees regardless of the grading style of a particular attending. We all recall from our college days (unless you have reached the age where supratentorial cortical atrophy is proceeding at a seemingly exponential pace) that some professors were quite benevolent in their grading approach (an easy A), whereas others were harsh and difficult and caused us to toil long and late hours to “make the grade.” Regardless, the system designed by the authors will identify these grading tendencies among surgical educators, so that improvement of the trainee can still be ascertained despite the individual grading differences of various faculty members. I applaud the authors for clearly stating that, ideally, the fellowship program director or, in my opinion, all recently involved members of the surgical faculty should meet face-to-face with the trainee every few weeks to provide verbal feedback regarding performance. The importance of these personal mentoring interactions must not be forgotten or underemphasized.

I find one particular figure from the manuscript to be interesting. This may say something (or nothing beyond my own obtuse thoughts) about the expectations that we place on ourselves as surgeons and, as surgical educators, on our trainees. Figure 8 provides an overall case assessment for one trainee. Both the trainee and the faculty felt that a slow and progressive improvement (better grade) in the trainee’s performance occurred until approximately the 60th case performed by the trainee. At that point, the grading by the faculty hit a plateau, with a slight decrease in the grade by the 100th case. Interestingly, the trainee became a bit harsher in his or her self-assessment and showed a trend toward a drop in the grade after the 60th case. After the 100th case, the performance assessments of the trainee and the faculty became concordant, but I am nonetheless fascinated by these trends. It would be interesting to study this with a larger number of trainees and faculty members over a longer period of time to determine the point at which our expectations of continued improvement in performance increase, and where we become less patient with ourselves as surgeons and, as surgical educators, expect more of our trainees.

I highly recommend this manuscript to all who are interested in the training of our current and future generations of surgeons. All of us must seek avidly to develop, test, and improve our educational assessment tools. We will not have as many encounters with our trainees because of the limitations on the number of hours that they spend in training. We are still expected to recognize and train the best possible surgeons to provide care for our burgeoning population of patients who will be diagnosed with malignant disease. Systems such as the one developed by Roach and colleagues at the University of Chicago should be considered and carefully evaluated, because all of us should be interested in methods that enhance the timeliness and value of feedback to surgical trainees.