Computer-enhanced laparoscopic training system (CELTS): bridging the gap
There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS).
CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed.
The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance.
CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
Keywords: Simulation · Laparoscopic skills training · Education · Virtual reality · Laparoscopic surgery
Minimally invasive surgery is a technically demanding discipline requiring unique skills that are not necessary for conventional open surgery. These skills have traditionally been acquired under the apprenticeship model in animal and human subjects. Recent efforts to develop standardized structured training programs in minimally invasive surgery have generally involved the use of training boxes or computer-based virtual reality simulations. However, none of these trainers has been widely accepted and officially integrated into a surgical training curriculum or any other sanctioned training course. Among the impediments to simulator acceptance by organized medicine are the lack of realism and the lack of appropriate performance assessment methodologies. Thus, it is clear that there is a large and growing gap between the need for better training methodologies and the available training systems.
In an effort to bridge this gap and overcome the disadvantages of the currently available training methods, we developed the Computer-Enhanced Laparoscopic Training System (CELTS), as a step toward a more realistic, clinically relevant, and standardized skills trainer. In this report, we describe CELTS and discuss the motivation for developing such a system.
Materials and methods
CELTS consists of a mechanical interface, a customizable set of tasks, a standardized performance assessment methodology, and an Internet-based software interface.
Using synthetic models for each of these tasks provides accurate deformation and force feedback during manipulation, resolving the tissue–instrument force feedback problem associated with virtual reality. For each training task, the system uses a railed locking and alignment mechanism to secure a common task tray to the base (Fig. 2). Once the tray is locked in place, the operator can proceed with the training exercise without dislodging it from the camera’s field of view. Task trays can be changed easily and quickly. This design gives task designers a model within which to develop new tasks and creates a common scale among all tasks tested by CELTS.
Performance assessment methodology
To establish an expert performance baseline database for each of the three tasks, a panel of surgeons considered experts in laparoscopy completed each task repeatedly. Utilizing a z-score statistic (Fig. 3), any subsequent performance by a trainee is compared to this expert baseline and assigned a standardized overall score from 0 to 100.
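The report does not spell out the exact normalization, but a minimal sketch of one plausible z-score scoring scheme, with illustrative metric names, an assumed three-standard-deviation cutoff, and made-up baseline numbers, might look like:

```python
from statistics import mean, stdev

def z_scores(trial, expert_baseline):
    """Standardize each metric of a trial against the expert baseline.

    `expert_baseline` maps a metric name to the list of expert values
    recorded for the same task; `trial` maps a metric name to the
    trainee's value. The metric names here are illustrative, not
    CELTS's actual identifiers.
    """
    out = {}
    for metric, value in trial.items():
        mu = mean(expert_baseline[metric])
        sigma = stdev(expert_baseline[metric])
        out[metric] = (value - mu) / sigma
    return out

def overall_score(trial, expert_baseline, max_dev=3.0):
    """Collapse per-metric z-scores into a single 0-100 score.

    Both example metrics are lower-is-better, so a positive z-score
    means worse than the expert mean. The score is 100 when the trial
    matches the expert mean and falls linearly to 0 at `max_dev`
    standard deviations away (an assumed cutoff, not from the paper).
    """
    zs = z_scores(trial, expert_baseline)
    avg_z = sum(zs.values()) / len(zs)
    # Clamp the average deviation into [0, max_dev], then rescale.
    penalty = min(max(avg_z, 0.0), max_dev)
    return 100.0 * (1.0 - penalty / max_dev)

# Hypothetical expert baseline: five expert trials per metric.
expert_baseline = {
    "time_s": [45, 50, 48, 52, 47],
    "path_length_cm": [120, 110, 115, 125, 118],
}
trainee_trial = {"time_s": 90, "path_length_cm": 200}
print(overall_score(trainee_trial, expert_baseline))  # prints 0.0
```

A trial matching the expert means scores 100, while this trainee trial falls far outside the expert envelope and scores 0, mirroring the gap between experts and trainees described below.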
The software interface
A database interface was added to maintain each user’s profile information and record all vital information about task performance. The database system uses MySQL, a popular public-domain package. With this type of database system, a separate database server process is started on the local machine (or on a remote machine). CELTS establishes a secure connection to the database server and then issues queries to add or manipulate records within the approved database. The database contains the trainee’s unique identification number, demographic data, and expertise level. Each time a trainee uses the system, one new record is added to that individual’s database. This record includes the user identification number, the session date and time, the task number identifying which task was tested, the complete raw tracking measurements, the computed metric parameters, and the overall score.
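The record layout described above can be sketched as a small schema. The column names below are illustrative rather than CELTS's actual schema, and sqlite3 stands in for the MySQL server the system actually connects to:

```python
import sqlite3

# In-memory stand-in for the remote MySQL server described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trainee (
    trainee_id   INTEGER PRIMARY KEY,  -- unique identification number
    demographics TEXT,                 -- demographic data
    expertise    TEXT                  -- e.g. 'novice' or 'expert'
);
CREATE TABLE session (
    session_id    INTEGER PRIMARY KEY,
    trainee_id    INTEGER REFERENCES trainee(trainee_id),
    session_time  TEXT,    -- session date and time
    task_number   INTEGER, -- which task was tested
    raw_tracking  BLOB,    -- complete raw tracking measurements
    metrics_json  TEXT,    -- computed metric parameters
    overall_score REAL     -- standardized 0-100 score
);
""")

# One new record is added each time a trainee uses the system.
conn.execute(
    "INSERT INTO trainee (trainee_id, demographics, expertise) VALUES (?, ?, ?)",
    (1, "PGY-2, right-handed", "novice"),
)
conn.execute(
    "INSERT INTO session (trainee_id, session_time, task_number, "
    "raw_tracking, metrics_json, overall_score) VALUES (?, ?, ?, ?, ?, ?)",
    (1, "2004-01-15 09:30", 2, b"", '{"depth_perception": -1.2}', 62.5),
)

# A trainee or instructor can retrieve current or past records.
rows = conn.execute(
    "SELECT task_number, overall_score FROM session WHERE trainee_id = ?",
    (1,),
).fetchall()
print(rows)  # prints [(2, 62.5)]
```

The same queries, issued through the Web interface described next, are what make a trainee's history immediately reviewable after a session.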
A Web server is created on the system that runs the main application. Thus, the database information is conveniently accessible for review through a secure Web interface (i.e., a dedicated Web site has been designed to give access to the most important parameters available in the database). Once the database has been populated with records at the end of a training session, the trainee and/or instructor can immediately retrieve current or past records from a Web page.
We administered a survey to a panel of 30 expert surgeons attending the 8th annual meeting of the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) as a means of exploring surgeons’ requirements for an “ideal” laparoscopic skills trainer. The experts were also asked to rate the importance of various metrics in assessing performance.
To validate our system, we initiated a two-phase study. The aim of the study was to evaluate the ability of CELTS to discriminate between experts and nonexperts performing the same task. For the initial experiment, we asked three expert surgeons, none of whom were included in the initial expert database, to perform each of the three tasks described above 10 times. All of the trials were scored by CELTS. After completing the trials, the surgeons were asked to rate their own performance as “perfect” or “satisfactory”; this enabled us to determine whether the scores reported by CELTS correlated with the experts’ subjective evaluation of their own performance. For the second phase, we also asked a group of five novices to perform the same set of tasks. Again, each trial was scored by CELTS. Expert and novice scores were then compared to assess whether CELTS could reliably distinguish between the two groups.
Our goal was to develop an advanced educational and clinically relevant training system after considering surgeons’ requirements for an “ideal” laparoscopic skills trainer. CELTS is a laparoscopic skills trainer that uses real instruments, a full-color video display, software-based task-independent metrics, and a standardized performance assessment methodology. It is a novel computer-enhanced training system that aims to bridge the gap between currently available simulators and needed training methodologies.
The need for better training methodologies has recently been highlighted by many medical organizations with an educational focus. The American Board of Medical Specialties (ABMS) and the Accreditation Council for Graduate Medical Education (ACGME) have initiated a joint outcomes project to identify and quantify the factors that constitute “medical competence” and to promote the development of appropriate training models that would improve medical performance and skills acquisition. The Institute of Medicine, in its landmark report “To Err Is Human,” explicitly recommended (Recommendation 8.1) that “hospitals and medical training facilities should adopt proven methods of training such as simulation”. Additionally, the conclusion of the Surgical Simulation Conference, held in Boston in April 2001 and sponsored by the American College of Surgeons, was that “simulators should be used well into the future to teach, refine and test surgical skills”. There is no doubt that, especially in laparoscopic surgery, skills training optimizes the learning experience in the operating room, a limited and expensive resource, by increasing the trainee’s familiarity and level of confidence with the fundamentals of laparoscopy.
Although the importance of training is well established, there is currently no consensus on the best and most effective training methods. Animal models, although considered the most effective training modality, are not the most realistic and are therefore not the preferred method of training, as shown by our survey. The use of training boxes in which rudimentary tools and objects simulate anatomical structures remains the most popular modality. Our survey also confirmed that the medical community is not satisfied with the currently available virtual reality simulators. Computer simulation has emerged as a promising tool that may provide new solutions to the limitations of current training systems. Computer simulation can revolutionize medical education through the quantification of performance and the standardization of training regimens. Computer-assisted simulators can quantify a variety of parameters, such as instrument motion, applied forces, instrument orientation, and dexterity, none of which can be measured with non–computer-based training systems. With proper assessment and validation, such systems can provide both initial and ongoing assessment of an operator’s skill. Additionally, a computerized trainer can provide either terminal (post–task completion) or concurrent (real-time) feedback during training episodes, thus enhancing skills acquisition. According to Dr. David Leach, the executive director of the ACGME, “What we measure we tend to improve”. The implicit challenge in Dr. Leach’s comment is for us to make measurements that are relevant to those skills that require improvement. In the field of surgical simulation, standard measurements have not yet been agreed upon. Currently, most simulators measure time and path length while performing a particular task, but these measures are not considered sufficient indicators of performance, according to our panel of experts.
Until recently, there was a tendency to view performance assessment and metrics in simplistic terms. The non–computer-based laparoscopic training boxes and the first computer-based trainers used only outcome measures to evaluate performance and learning. However, effective metrics should not only provide outcome information, but should also evaluate the key factors that affect performance. Currently available training systems lack a standardized performance assessment methodology, which is an essential component of a successful educational tool. CELTS is the first laparoscopic skills trainer that incorporates a standardized set of five metrics, each of which measures a specific skill that should be mastered by the laparoscopic surgeon in training. A trainee’s performance is compared to the performance of an expert surgeon. After each training session, CELTS reports not only the trainee’s evaluation, but also the scores of expert surgeons performing the same task. Thus, the feedback system of CELTS serves as a virtual instructor, eliminating the need for the physical presence of an instructor during each training session. Additionally, the flexible Web-based interface provides both instructors and trainees with remote access, further facilitating the educational process.
Another major issue with simulators is the requisite level of realism. Surgeons believe that the ideal trainer is one that reproduces real operating conditions and teaches tangible operative skills. However, current virtual reality systems cannot provide “real-world” authenticity. Although it has been shown that practicing on simple abstract tasks can lead to skills acquisition, surgeons historically have never used abstract tasks for their training. This may explain in part why the currently available computer-based skills trainers are not completely accepted by the surgical community. It is clear that without an objective, standardized, and clinically meaningful feedback system, the simplistic and abstract tasks used in the majority of available training systems are not sufficient for learning the subtleties of delicate laparoscopic tasks and manipulations, such as suturing.
There are other fundamental issues that cannot be ignored. The most important of these are force feedback and visual feedback. While force feedback is diminished in laparoscopic manipulations, surgeons adapt to this inherent disadvantage by developing clever psychological adaptation mechanisms and special perceptual and motor skills. Conscious inhibition (gentleness) is considered one of the major adaptation mechanisms: surgeons use visual feedback cues to sense applied force, despite a lack of actual force feedback. We have called this adaptive transformation “visual haptics.” Using “visual haptics,” a surgeon is able to appropriately modify the amount of mechanical force applied to tissues, predominantly based on the input of visual cues. The visual cues are primarily tissue deformations. For example, a surgeon may not be able to feel a structure that is stretched when retracted, but he or she may sense the retraction by noting subtle indicators such as color change, alteration of contour, and adjacent tissue integrity on the monitor. Although force feedback is a requirement for the ideal trainer, introducing force feedback in computer-based learning systems is difficult and requires knowledge of two elements: instrument–tissue interaction (computation of the forces applied during surgical manipulations) and human–instrument interaction (design and development of an interface). These are active research areas, and efficient and cost-effective solutions remain to be found. However, the importance of realistic visual feedback that depicts tissue deformations accurately cannot be overstated. The creation of virtual deformable objects is a cumbersome process that requires developing a mathematical model and knowledge of the object’s behavior under different types of manipulation.
Given the need for accurate visual feedback and the limitations of current technology, we believe that the simplest and most cost-effective solution is using real laparoscopic instruments and cameras as well as synthetic task pads, as we have done in CELTS.
In conclusion, we have developed a novel computer-enhanced laparoscopic skills trainer that combines the advantages of computer simulation with those of the traditional and popular training boxes. We also defined a set of appropriate and clinically relevant performance metrics and created a standardized scoring system that compares a trainee’s performance to that of an expert. The initial proof-of-concept studies have demonstrated the validity of this novel approach, and further studies are in progress.
- 1. Accreditation Council for Graduate Medical Education (ACGME) Outcome Project. Available online at: http://www.acgme.org/outcome/
- 2. Cotin S, Stylopoulos N, Ottensmeyer M, Neumann P, Rattner D, Dawson S (2002) Metrics for laparoscopic skills trainers: the weakest link! In: Dohi T, Kikinis R (eds) Lecture notes in computer science, vol 2488. Springer-Verlag, Berlin, pp 35–43
- 4. Healy GB, Shore A, Meglan D, Russel M, Satava R (2002) The use of simulators in surgical education. Final report to the Board of Regents of the American College of Surgeons. Working Conference, Boston College, Chestnut Hill, MA, USA, 19–21 April 2002
- 5. Kohn LT, Corrigan JM, Donaldson MS (eds) (1999) To err is human: building a safer health system. Institute of Medicine, National Academy Press, Washington, DC
- 6. Metrics for Objective Assessment of Surgical Skills Workshop, Scottsdale, AZ, USA, 9–10 July 2001. Final report. Available online at: http://www.tatrc.org
- 7. MySQL database. Available online at: http://www.mysql.com
- 10. United States Surgical Corporation (USSC) Surgical Skills Training Center (2000) Report on the analysis and validation of the FLS examination. SAGES Fundamentals of Laparoscopic Surgery (FLS) Project. Society of American Gastrointestinal Endoscopic Surgeons, Santa Monica, CA, pp 1–20