Surgical Endoscopy, Volume 21, Issue 6, pp 1012–1016

Developing a multimedia environment for customized teaching of an adrenalectomy

  • Juan Cendan
  • Minho Kim
  • Sergei Kurenov
  • Jorg Peters
New Technology

Abstract

We have developed a computer-based simulation process that allows a surgical expert to create a customized operative environment. This virtual environment, the Toolkit for Illustration of Procedures in Surgery (3D TIPS), is deployed on a low-cost computer system and requires minimal training for the author. The learner can be engaged in training immediately, and the educator can modify the system and annotate the procedure to highlight specific points using video clips, operative images, and the like. A laparoscopic adrenalectomy is presented as a proof of concept in the accompanying article.

Keywords

Education, surgical · Technical, general · Technical, imaging and VR · Technical, training courses

Medical illustrations and, more recently, videos have become the standard for the dissemination, documentation, and teaching of surgical procedures. At present, turning descriptive text into illustration demands long hours from a trained medical illustrator and from the physician developing the presentation. The illustrator generates images that are revised repeatedly before the results are satisfactory. When an accurate depiction is required, a three-dimensional or multiple-view representation may be necessary to transfer the information satisfactorily. These paperbound reproductions are generally limited in their ability to illustrate the nuances of relative size, texture, or position. The process is only partly in the surgeon's control and, in the case of textbooks, can take years to reach the end users: senior residents and novice surgeons. Our system is intended to illustrate anatomic relationships and surgical procedures, and anatomists, surgical residents, and practicing surgeons could all take advantage of it.

Alternatively, in the realm of laparoscopy, the procedure can be captured on video and then edited to respect the viewer's time and storage limitations. This approach is constrained by the anatomy of the particular patient being recorded, and the recording requires the patient's consent. Furthermore, the current learning process involves only passive viewing. In this article we describe the system and demonstrate a proof of concept by illustrating a laparoscopic adrenalectomy with the three-dimensional Toolkit for Illustration of Procedures in Surgery (3D TIPS).

Description of the optimal system

After word processors became commonplace, documents were entered and edited entirely by the author. Along with the Internet, this capability increased business communication by several orders of magnitude and currently underpins the bulk of the publishing market.

We have developed 3D TIPS, a system that facilitates the documentation of a surgical procedure on a surgeon's desktop computer. The system allows for immediate procedural capture, annotation, and active replay by the learner. This has become possible because of our recent ability to exploit graphics hardware for haptic authoring [1].

The development goal is for the surgeon to be able to document critical procedural relationships with the click of the computer mouse or a similarly easy-to-use input device. In our system we have opted to record the procedural and tactile details [2] of the surgical procedure through a feedback stylus guided by the surgeon (Fig. 1). Once recorded, the procedure can be edited and augmented with additional images and text. The ultimate product is a robust instructional document that students can use immediately. The haptic stylus can then be used to guide the student by retracing the surgery in the virtual environment. At any point in the procedure, the operator can pause the process and explore the environment, continuing or repeating the process until the skills are honed.
Fig. 1.

The author is shown manipulating a haptic stylus to customize the 3D anatomical virtual environment. The author can select organs, vessels, and tissues from a preselected list and arrange them in any desired manner using an interface that is immediately recognizable.
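To make the recording step concrete, the captured procedure can be thought of as a sequence of time-stamped stylus samples that the learner later retraces. The following sketch only illustrates that idea; the class names, fields, and units are our assumptions, not the actual TIPS data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of a recorded authoring session; the names, fields, and
# units are illustrative assumptions, not the actual TIPS file format.

@dataclass
class StylusSample:
    t: float                                        # seconds since recording started
    position: Tuple[float, float, float]            # stylus tip in scene coordinates (m)
    orientation: Tuple[float, float, float, float]  # quaternion (w, x, y, z)
    force: Tuple[float, float, float]               # force applied through the stylus (N)

@dataclass
class RecordedProcedure:
    title: str
    samples: List[StylusSample] = field(default_factory=list)

    def add_sample(self, sample: StylusSample) -> None:
        """Append one time-stamped stylus reading captured while the author works."""
        self.samples.append(sample)

    def sample_at(self, t: float) -> StylusSample:
        """Return the most recent sample at or before time t, for guided replay."""
        current = self.samples[0]
        for s in self.samples:
            if s.t > t:
                break
            current = s
        return current
```

During replay, the learner's stylus would be drawn toward sample_at(t) as time advances; pausing simply stops t from advancing so the environment can be explored.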

Description of the system

The surgeon first constructs and details a related group of organs, including vessels and overlying soft tissues. These are edited individually for haptic and visual characteristics. The system allows the surgeon to easily program and display any anatomic relationship in 3D space. For example, if the author wants to demonstrate an adrenalectomy with three large veins draining into the vena cava, this can be done using a point-and-click mechanism to add and alter the anatomic components. The anatomy is selected from a menu of options that includes the standard solid organs and vessels, which can then be modified according to individual haptic and visual variables. These variables include properties of vessels such as elasticity, color, and size. To our knowledge, no other simulator allows the user to define the anatomic environment with this immediacy.
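As a way of picturing the per-tissue variables described above, consider the following sketch. The field names, units, and default values are assumptions made for illustration; they are not the actual TIPS organ templates.

```python
from dataclasses import dataclass, replace
from typing import Dict, Tuple

# Illustrative sketch only: field names, units, and defaults are assumptions,
# not the actual TIPS organ templates.

@dataclass(frozen=True)
class TissueTemplate:
    name: str                          # e.g. "vena cava", "right adrenal gland"
    color: Tuple[float, float, float]  # RGB components in [0, 1]
    stiffness: float                   # spring constant used for haptic response (N/m)
    scale: float = 1.0                 # relative size multiplier

    def customized(self, **overrides) -> "TissueTemplate":
        """Return a copy with selected visual or haptic parameters overridden,
        mirroring the point-and-click customization described in the text."""
        return replace(self, **overrides)

# A small menu of predefined templates the author can pick from and adjust.
TEMPLATE_MENU: Dict[str, TissueTemplate] = {
    "vena cava": TissueTemplate("vena cava", (0.2, 0.2, 0.8), stiffness=300.0),
    "adrenal gland": TissueTemplate("adrenal gland", (0.9, 0.8, 0.4), stiffness=150.0),
    "adrenal vein": TissueTemplate("adrenal vein", (0.3, 0.3, 0.9), stiffness=200.0, scale=0.3),
}

# Example: enlarge and soften the adrenal vein to highlight a teaching point.
aberrant_vein = TEMPLATE_MENU["adrenal vein"].customized(scale=0.6, stiffness=120.0)
```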

Using a standard computer interface augmented by a haptic stylus, the 3D figures are selected from a superset of predefined templates (Fig. 2). Included in the selection are color and tactile parameters that can be customized for the individual tissue (Fig. 3). Surgeons who want to highlight specifics can edit organ texture as simply as underlining text in a word processor.
Fig. 2.

Screenshot of the predefined anatomical options that the author can use to populate the 3D virtual scene.

Fig. 3.

Once the author has chosen an organ for inclusion in the scene, he can then customize the tissue with respect to color and relative stiffness. The magnified inset shows the right adrenal gland being deflected with the haptic device.

Perspectives and magnifications can be changed with the click of a mouse; soft tissue can be altered to illustrate pathologic differences; colors can be enhanced to highlight structure; and the entire environment can be saved and transferred over the Internet. Rather than one or two fixed views, an unlimited number of perspectives on the surgical environment are available by rotating the stylus or clicking the mouse (Figs. 2, 3, and 4).
Fig. 4.

The anatomic construct of the adrenal vascular supply can be immediately customized to accommodate the required teaching point(s). For example, additional or aberrant vessels can be easily displayed.
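Because an authored environment can be saved and sent over the Internet, it is natural to imagine the scene being written to a simple text file. The JSON layout below is purely an assumption used to make the idea concrete; it is not the storage format TIPS actually uses.

```python
import json
from typing import Dict, List

# Hypothetical serialization of an authored scene; the JSON layout is an
# assumption, not the format TIPS actually uses.

def save_scene(path: str, organs: List[Dict]) -> None:
    """Write the organ descriptions (name, color, stiffness, position) to disk."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump({"version": 1, "organs": organs}, fh, indent=2)

def load_scene(path: str) -> List[Dict]:
    """Read a saved scene so it can be reopened locally or by a colleague."""
    with open(path, "r", encoding="utf-8") as fh:
        return json.load(fh)["organs"]

if __name__ == "__main__":
    scene = [
        {"name": "vena cava", "color": [0.2, 0.2, 0.8], "stiffness": 300.0,
         "position": [0.0, 0.0, 0.0]},
        {"name": "right adrenal gland", "color": [0.9, 0.8, 0.4], "stiffness": 150.0,
         "position": [0.04, 0.02, 0.0]},
    ]
    save_scene("adrenalectomy_scene.json", scene)
    print(load_scene("adrenalectomy_scene.json"))
```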

The next step is to record the procedure in the virtual surgical environment (Fig. 5). A complicated operation can be documented and nuances in anatomic relationships illustrated; the surgeon decides what is relevant. The recorded procedure can then be annotated with the author's own operative pictures, videos, or notes by loading this material into the TIPS software package. These annotations can be deployed at the proper time to give the user additional critical information; they are primed to present themselves as the relevant portion of the experience is approached. This dynamic feature of the system allows for a greater understanding of a surgical procedure than simply reading about it in a textbook. Subtle placement of hands and instruments can be fully explored. Finally, this type of communication allows for a novel approach to surgical education.
Fig. 5.

The anatomy can be covered with a fatty deposit. The educator and the learner "dissect" through this tactile layer to reach and uncover the underlying significant anatomy. The small inset shows an actual operative photograph of the right adrenal vein, adding value to the interaction.
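The annotations that present themselves as the relevant portion of the procedure is approached can be imagined as media items anchored to points along the recorded dissection and triggered by proximity. The sketch below is an assumption about how such triggering might work; the names and the trigger radius are not taken from TIPS.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Illustrative assumption: an annotation (operative photograph, video clip, or
# note) is anchored to a point in the scene and is displayed once the learner's
# stylus comes within a trigger radius of that point.

@dataclass
class Annotation:
    media_path: str                     # e.g. a photograph of the right adrenal vein
    anchor: Tuple[float, float, float]  # scene point the annotation refers to
    trigger_radius: float = 0.01        # metres; assumed value
    shown: bool = False

def due_annotation(stylus_pos: Tuple[float, float, float],
                   annotations: List[Annotation]) -> Optional[Annotation]:
    """Return the first unshown annotation whose anchor the stylus has reached."""
    for a in annotations:
        if not a.shown and math.dist(stylus_pos, a.anchor) <= a.trigger_radius:
            a.shown = True
            return a
    return None
```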

Communication between attending surgeons and residents about "what if" scenarios can be fully explored before surgery by programming anatomic variables. For example, the attending surgeon can construct a biliary anatomic variation, cover it with fatty tissue, and record the dissection so that the resident can rehearse it in a safe virtual environment.

The final step involves the learner. The student can access the system and immediately be guided by the stylus through the procedure as many times as the student wishes. Although not yet implemented, the system will be able to keep track of errors, time to completion, and other metrics as indicators of competency.
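Although such tracking is not yet implemented, completion time and errors might be captured along the following lines; everything in this sketch, including what counts as an "error," is hypothetical.

```python
import time
from dataclasses import dataclass, field
from typing import List

# Hypothetical learner metrics; not implemented in TIPS at the time of writing.
# The notion of an "error" here (straying too far from the recorded trajectory)
# is purely an assumption.

@dataclass
class SessionMetrics:
    started_at: float = field(default_factory=time.monotonic)
    errors: List[str] = field(default_factory=list)

    def record_error(self, description: str) -> None:
        """Log one deviation, e.g. 'strayed > 2 cm from the recorded path'."""
        self.errors.append(description)

    def time_to_completion(self) -> float:
        """Elapsed seconds since the learner started the guided procedure."""
        return time.monotonic() - self.started_at
```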

The current system can be installed on a laptop or desktop. Anatomy is drawn from existing templates and databases, from one of several Internet anatomic-reconstruction websites, or from specific patient images. There is no need to construct an image de novo; in fact, that process would detract from the accessibility of this platform. The only additional equipment needed is the operative stylus; we use the PHANTOM® Omni™ Haptic Device from SensAble Technologies, which costs approximately $1500–$2000. The software is distributed as open source at no cost. The greatest time commitment is that of the author, i.e., the person who wants to illustrate a particular procedure.

Proof of concept: illustrating an adrenalectomy

In this scenario the surgeon wants to demonstrate the local anatomy of the right adrenal gland. The surgeon can choose the gland and the relevant vascular structures (vena cava and adrenal vein) from a menu of options (Figs. 1 and 2). These are arranged as the surgeon sees fit; for example, several draining veins can be illustrated. Once the structural components are chosen, they can be given different colors and haptic attributes (Figs. 3 and 4). They are then covered with fatty tissue, which is applied as if with a "spray gun" (Fig. 5). When the anatomic configuration is complete, the surgeon records the motions required to perform the virtual operation. In this case, the surgeon can displace the fatty tissue to evaluate the anatomic and venous relationships, then deflect and probe the tissue to analyze those relationships further. The steps of the dissection are recorded. Finally, the recording can be annotated for the learner: drawings, operative pictures, and notes are simply uploaded into windows that appear at the appropriate time in the surgical sequence.
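To make the walkthrough concrete, the authoring steps might be scripted roughly as follows. The function names and the crude treatment of the "spray gun" (small, soft fat blobs scattered over the chosen structures) are our own assumptions, not the actual TIPS workflow.

```python
import random
from typing import Dict, List, Tuple

# Hypothetical script of the proof-of-concept authoring steps; all names and
# the simplified "spray gun" model of fatty tissue are assumptions.

def place_organ(name: str, position: Tuple[float, float, float],
                color: Tuple[float, float, float], stiffness: float) -> Dict:
    """Describe one structure chosen from the menu and positioned by the author."""
    return {"name": name, "position": position, "color": color, "stiffness": stiffness}

def spray_fat(center: Tuple[float, float, float], radius: float, count: int) -> List[Dict]:
    """Approximate the spray-gun fat layer as small, soft blobs around a centre point."""
    blobs = []
    for _ in range(count):
        offset = [random.uniform(-radius, radius) for _ in range(3)]
        blobs.append({"name": "fat",
                      "position": tuple(c + o for c, o in zip(center, offset)),
                      "color": (1.0, 0.95, 0.6),
                      "stiffness": 40.0})
    return blobs

# Assemble the right adrenal scene, then bury it under a fatty layer.
scene = [
    place_organ("vena cava", (0.0, 0.0, 0.0), (0.2, 0.2, 0.8), 300.0),
    place_organ("right adrenal gland", (0.04, 0.02, 0.0), (0.9, 0.8, 0.4), 150.0),
    place_organ("right adrenal vein", (0.02, 0.01, 0.0), (0.3, 0.3, 0.9), 200.0),
]
scene += spray_fat(center=(0.03, 0.015, 0.0), radius=0.03, count=50)
```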

Once the procedure is recorded and annotated, the learner can use the system. The learner is “pulled” by the haptic device through the steps of the procedure, and the annotations present themselves at the correct time. The learner can also pause the system and “palpate” the structures.
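The sensation of being "pulled" by the haptic device can be modeled as a spring force that draws the learner's stylus toward the point it should currently occupy on the recorded trajectory. The gain and force limit below are assumed values chosen only for illustration.

```python
from typing import Tuple

# Illustrative sketch: a proportional ("spring") pull toward the recorded
# trajectory. The gain and force limit are assumptions, not TIPS parameters.

def guidance_force(stylus_pos: Tuple[float, float, float],
                   target_pos: Tuple[float, float, float],
                   gain: float = 80.0,       # N/m, assumed
                   max_force: float = 3.0    # N, assumed device-safe limit
                   ) -> Tuple[float, float, float]:
    """Force pulling the stylus toward the recorded target position, clamped so
    the device never exerts more than max_force newtons."""
    delta = [t - s for t, s in zip(target_pos, stylus_pos)]
    force = [gain * d for d in delta]
    magnitude = sum(f * f for f in force) ** 0.5
    if magnitude > max_force:
        force = [f * max_force / magnitude for f in force]
    return tuple(force)
```

In such a scheme, pausing the replay simply freezes target_pos, so the learner can "palpate" the surrounding structures before resuming.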

Results

Surgeons from the Department of Surgery at the University of Florida have evaluated the toolkit by authoring procedures and answering a questionnaire (Appendix). Based on their suggestions and feedback, we have drawn a number of conclusions and identified future directions.

We have received feedback that the interface is natural and easy to learn and use, that the system would be immediately useful, and that it could be used to convey information that is otherwise difficult to communicate. With respect to the realism of the experience, users noted the following. Purely visual feedback was unsatisfactory in providing a feeling of responsive materials. Conversely, purely haptic feedback, without visual confirmation, was equally unsatisfactory; in this case users tended to immediately exert maximal force on the elastic tissue. The perception of tissue elasticity, however, improved dramatically when the haptic and visual channels were combined. Once the visual cues indicated contact, the user's sensory focus switched from the initial visual focus and force was exerted more delicately. Synchronization of visual and haptic cues is crucial, and any lack of resolution, time lag, or inconsistency between the two senses must be avoided.
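A common way to avoid the lag and inconsistency the evaluators warned about is to compute forces at a much higher rate than the graphics refresh, with both loops reading one shared scene state. The rates below (about 1 kHz for haptics, 60 Hz for rendering) are typical figures assumed for illustration; they are not values reported for TIPS.

```python
import threading
import time

# Illustrative assumption: the haptic loop runs near 1 kHz while rendering runs
# near 60 Hz, and both read one shared state so the two senses stay in step.

scene_state = {"stylus_pos": (0.0, 0.0, -0.001), "in_contact": False}
state_lock = threading.Lock()
running = True

def draw_scene(in_contact: bool) -> None:
    """Placeholder redraw; a real renderer would show the tissue deflection
    that visually confirms the contact the user is feeling."""
    pass

def haptic_loop() -> None:
    while running:
        with state_lock:
            # Read the stylus and compute contact forces here (omitted); record
            # whether the tip is touching tissue so the renderer agrees with it.
            scene_state["in_contact"] = scene_state["stylus_pos"][2] <= 0.0
        time.sleep(0.001)   # ~1 kHz force updates

def visual_loop() -> None:
    while running:
        with state_lock:
            contact = scene_state["in_contact"]
        draw_scene(contact)
        time.sleep(1 / 60)  # ~60 Hz rendering

if __name__ == "__main__":
    for target in (haptic_loop, visual_loop):
        threading.Thread(target=target, daemon=True).start()
    time.sleep(0.5)         # run briefly for demonstration
    running = False
```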

Our surgeons have recommended the following additional improvements: adding organs to the 3D scene by clicking the organ image rather than highlighted text, reversing the buttons on the mouse interface, and simplifying the parameters panel to streamline interaction. The system has not yet been tested with residents, nor have we developed a cogent mechanism to test the transfer of learning from this environment to the actual operating theater. Future experiments will need to be designed to demonstrate learning.

Conclusions

Immediately deployable surgical and anatomic scenarios can be created using this computer-based system. The system is analogous to a word processor, and someone with only modest computer experience can use it. Our software group is addressing specific criticisms of the interface in an effort to improve usability and acceptability. Visual and haptic synchronization is crucial to support a surgical exercise so that the student can follow the points emphasized by the author without being distracted by haptic or visual artifacts.

This approach to procedural education allows an expert to create immediately deployable multimedia exercises with good haptic and visual fidelity. A student or resident can use the package to prepare for the conditions constructed by the expert. The immediacy and direct user programmability make this a unique educational platform that takes advantage of current advances in technology [3].

Finally, we plan to deploy this system so that anyone can contribute their operative experiences. In effect, it could become an interactive web-based atlas to which many experienced surgeons can contribute.

Acknowledgments

This project was sponsored by the University of Florida College of Medicine Chapman Education Center Grant.

References

  1. Kim M, Punak S, Cendan J, Kurenov S, Peters J (2006) Exploiting graphics hardware for haptic authoring. Stud Health Technol Inform 119:255–260
  2. McColl R, Brown I, Seligman C, Lim F, Alsaraira A (2006) Haptic rendering for VR laparoscopic surgery simulation. Australas Phys Eng Sci Med 29(1):73–78
  3. Satava RM (2006) Looking forward. Surg Endosc 20(Suppl 2):S503–S504

Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  • Juan Cendan (1)
  • Minho Kim (3)
  • Sergei Kurenov (2)
  • Jorg Peters (3)

  1. University of Florida College of Medicine, Gainesville, USA
  2. Department of Surgery, University of Florida, Gainesville, USA
  3. Department of Computer and Information Science and Engineering, University of Florida, Gainesville, USA
