Surgical Endoscopy, Volume 26, Issue 12, pp 3413–3417

Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery

  • Jędrzej Kowalczuk
  • Avishai Meyer
  • Jay Carlson
  • Eric T. Psota
  • Shelby Buettner
  • Lance C. Pérez
  • Shane M. Farritor
  • Dmitry Oleynikov

Abstract

Background

Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU).

Methods

The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view.

Results

Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm.

Conclusions

The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability has the potential to transform surgery by helping novice and expert surgeons alike to accurately delineate variations in internal anatomy.

Keywords

Augmented reality · Computer-integrated surgery · Image-based reconstruction · Minimally invasive surgery · Real-time stereo matching · Robotic surgery

The availability of remotely actuated surgical controls and digital video feedback presents an opportunity to create an enhanced surgical experience through augmented reality and semi-automated surgery. Proposed enhancements using augmented reality include identification of the surgical tools in the operating field [1], automated classification of organs and tissues [2], and substitution of visual information for tactile feedback [3]. Whereas augmented reality is capable of providing enhanced visual feedback to the surgeon, semi-automated surgery has the potential to revolutionize surgical technique by removing the burden of routine surgical motions used to differentiate structure and tissue planes.

With respect to digital video feedback, one of the most significant challenges is to generate an accurate real-time 3D model of the operating field. Although computed tomography (CT) and magnetic resonance imaging (MRI) scans can generate highly accurate 3D models of the operative anatomy, they are not suitable for real-time applications in dynamic environments such as the abdomen or the thorax. Breathing, cardiac pulsations, and peristalsis can cause nonrigid motions of tissues in the surgical environment, thus limiting these radiographic tools to pre- and postoperative applications.

An alternative approach to generating a real-time 3D model of the operating field is to use stereoscopic digital video feedback with image-based 3D reconstruction [4, 5, 6]. The two most significant difficulties with this approach are capturing high-quality digital video of the operating field and developing a real-time reconstruction algorithm with sufficient accuracy.

Image-based systems used for computer-integrated surgery have been proposed and tested both for preoperative planning and during surgery [7]. Using preoperative CT scans and a probabilistic model, Hu et al. [8] reconstructed the 3D structure of a beating heart, providing the surgeon with a wider effective field of view of the environment. Also using preoperative CT scans, Figl et al. [9] proposed an augmented reality system interfaced with a da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) for performing a totally endoscopic coronary artery bypass that overlaid supplementary information on the stereoscopic view of the operating field.

3D reconstruction of the operating field can be accomplished without CT or MRI scans by applying image-based techniques that perform stereo matching on the images obtained from a stereoscopic camera. Stereo matching is the process of finding the positions of objects common to the left and right images of a scene. Stereo matching is especially complicated in a surgical environment due to reflections of the light source and low-contrast surfaces [10]. However, new iterative stereo matching algorithms have successfully addressed these challenges [6, 11, 12]. In particular, the iterative adaptive support-weight stereo matching algorithm developed by Psota et al. [12] provides high-quality results, and its structure is suitable for implementation on parallel hardware such as a graphics processing unit (GPU).
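To make the idea concrete, the following is a minimal sketch (in Python with NumPy) of adaptive support-weight cost aggregation for a single pixel, in the spirit of the algorithms cited above; the window size, weighting constants, and disparity range are illustrative choices, not the parameters of the authors' implementation.

```python
# Simplified adaptive support-weight stereo matching for one pixel.
# Assumes left/right are H x W x 3 arrays and the pixel lies far enough
# from the image borders that the windows stay in bounds.
import numpy as np

def support_weights(patch, center, gamma_c=7.0, gamma_p=36.0):
    """Per-pixel weights from color similarity and spatial proximity."""
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    color_dist = np.linalg.norm(patch - patch[center], axis=2)
    spatial_dist = np.hypot(ys - center[0], xs - center[1])
    return np.exp(-(color_dist / gamma_c + spatial_dist / gamma_p))

def aggregated_cost(left, right, x, y, d, radius=16):
    """Weighted absolute color difference between a window around (y, x)
    in the left image and the window shifted by disparity d in the right."""
    lp = left[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    rp = right[y - radius:y + radius + 1, x - d - radius:x - d + radius + 1].astype(float)
    c = (radius, radius)
    w = support_weights(lp, c) * support_weights(rp, c)
    raw = np.abs(lp - rp).sum(axis=2)      # per-pixel absolute color difference
    return (w * raw).sum() / w.sum()

def best_disparity(left, right, x, y, max_disp=64):
    """Winner-takes-all disparity for a single pixel."""
    costs = [aggregated_cost(left, right, x, y, d) for d in range(max_disp)]
    return int(np.argmin(costs))
```

Because the cost computation is carried out independently for every pixel and candidate disparity, the method maps naturally onto parallel hardware.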

Methods and materials

To achieve high-quality real-time 3D models of the operating field, a system is proposed that combines a custom high-definition digital stereoscopic camera with a GPU-accelerated implementation of the iterative adaptive support-weight stereo matching algorithm. A flow diagram of the complete real-time 3D system is shown in Fig. 1.
Fig. 1

Flow diagram of the proposed digital stereoscopic camera, 3D display, and 3D model-generation process

The two video streams captured by the camera are used in two complementary ways. First, they are routed into a commercial 3D display that provides a surgeon wearing the appropriate glasses with depth perception. Second, the video streams are routed into a computer with a GPU that executes the parallelized stereo matching algorithm to generate a geometrically accurate 3D model of the surgical environment in real time.

As shown in Fig. 1, the video streams are first aligned with each other using a mapping derived from a one-time calibration of the stereoscopic camera. Next, a stereo matching algorithm is applied to the video streams to determine the position of objects common to both images of the operating field. Stereo matching produces a mapping between corresponding pixels in the two images. By itself, this mapping does not provide meaningful information for calculating absolute positions in space. Rather, the mapping must be combined with information about the cameras to back-project corresponding pixels into 3D space. This results in a per-pixel depth map of the operating field that, when combined with a known camera position, can be used to calculate the absolute positions in space of each pixel captured in the images. Finally, to visualize these data, the original image is overlaid on the resulting 3D model.
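As a rough illustration of this pipeline, the sketch below strings together rectification, stereo matching, and back-projection using OpenCV. The semi-global matcher stands in for the paper's GPU-accelerated iterative adaptive support-weight algorithm, and the rectification maps and reprojection matrix Q are assumed to come from a one-time calibration (for example, via cv2.stereoRectify and cv2.initUndistortRectifyMap).

```python
# Sketch of the Fig. 1 pipeline: rectify, match, and back-project to 3D.
import cv2
import numpy as np

def reconstruct(frame_l, frame_r, maps_l, maps_r, Q, max_disp=128):
    # 1. Align the two video streams using the calibration-derived maps.
    rect_l = cv2.remap(frame_l, maps_l[0], maps_l[1], cv2.INTER_LINEAR)
    rect_r = cv2.remap(frame_r, maps_r[0], maps_r[1], cv2.INTER_LINEAR)

    # 2. Stereo matching: find corresponding pixels between the two views.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=max_disp,
                                    blockSize=5)
    disparity = matcher.compute(cv2.cvtColor(rect_l, cv2.COLOR_BGR2GRAY),
                                cv2.cvtColor(rect_r, cv2.COLOR_BGR2GRAY))
    disparity = disparity.astype(np.float32) / 16.0  # SGBM returns fixed-point values

    # 3. Back-project corresponding pixels into 3D space using the
    #    reprojection matrix Q from stereo rectification.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)

    # 4. Overlay the original image on the model: each 3D point keeps
    #    the color of the pixel it came from.
    colors = rect_l
    return points_3d, colors
```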

Due to the high computational complexity of the stereo matching algorithm, it is necessary to distribute the operations among the many parallel processors provided by the GPU to achieve real-time performance. The chosen GPU platform is an NVIDIA GeForce GTX 580 (NVIDIA Corporation, Santa Clara, CA, USA) with 16 processors, each containing 32 cores, operating at 1.544 GHz.
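To illustrate the kind of per-pixel decomposition this implies (not the authors' actual kernel), here is a toy Numba CUDA kernel in which each GPU thread computes a winner-takes-all disparity for one pixel from a simple sum-of-absolute-differences cost; the window size and disparity range are arbitrary.

```python
# Toy thread-per-pixel stereo matching on the GPU via Numba CUDA.
import numpy as np
from numba import cuda

MAX_DISP = 64
RADIUS = 4

@cuda.jit
def disparity_kernel(left, right, disp):
    y, x = cuda.grid(2)                     # one thread per output pixel
    h, w = left.shape
    if y < RADIUS or y >= h - RADIUS or x < RADIUS + MAX_DISP or x >= w - RADIUS:
        return
    best_cost = 1e30
    best_d = 0
    for d in range(MAX_DISP):
        cost = 0.0
        for dy in range(-RADIUS, RADIUS + 1):
            for dx in range(-RADIUS, RADIUS + 1):
                cost += abs(left[y + dy, x + dx] - right[y + dy, x + dx - d])
        if cost < best_cost:
            best_cost = cost
            best_d = d
    disp[y, x] = best_d

def compute_disparity(left_gray, right_gray):
    disp = np.zeros(left_gray.shape, dtype=np.float32)
    threads = (16, 16)
    blocks = ((left_gray.shape[0] + 15) // 16, (left_gray.shape[1] + 15) // 16)
    disparity_kernel[blocks, threads](left_gray.astype(np.float32),
                                      right_gray.astype(np.float32), disp)
    return disp
```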

The accuracy of the 3D models generated using the proposed system was first evaluated by comparing a set of measurements of a 3-Dmed Signature Soft Tissue Practice Pad (3-Dmed, Franklin, OH, USA) obtained manually with measurements extracted from the 3D model. Figure 2 shows the soft tissue practice pad together with seven different measurements (indicated by the yellow lines) used in the evaluation. Figure 3 shows the reconstructed 3D model generated by the system of Fig. 1 from which the corresponding measurements were extracted. The system also was evaluated in a nonsurvival procedure on a porcine model approved by the Institutional Animal Care and Use Committee (IACUC). In this experiment, five 10-s-long video sequences were captured and processed by the system.
Fig. 2

A signature soft tissue practice pad used to evaluate the accuracy of the 3D model. Original image with separate alphabetized measurements

Fig. 3

3D model using stereoscopic image-based reconstruction of the original object (Fig. 2)

Results

The calculated distances of the first experiment are given in Table 1. They show that the measurements extracted from the 3D model differ from those obtained manually by <1.5 mm, resulting in a mean absolute error of 0.637 mm. The minimal amount of error between these measurements illustrates the accuracy that the proposed 3D modeling system can provide.
Table 1
Results from the accuracy evaluation of the proposed real-time 3D modeling system

Segment | Manually measured distance (mm) | Extracted measured distance (mm) | Signed error (mm)
AB      | 52.3                            | 52.65                           | −0.35
CD      | 18.0                            | 19.04                           | −1.04
EF      | 21.7                            | 22.4                            | −0.7
GH      | 12.6                            | 12.2                            | 0.4
IJ      | 24.8                            | 24.18                           | 0.62
KL      | 46.1                            | 46.07                           | 0.03
MD      | 49.6                            | 48.28                           | 1.32
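The reported error statistics follow directly from the table; a few lines of Python reproduce the signed errors and the 0.637 mm mean absolute error.

```python
# Recompute the Table 1 error figures: signed error = manual - extracted.
manual    = {"AB": 52.3,  "CD": 18.0,  "EF": 21.7, "GH": 12.6,
             "IJ": 24.8,  "KL": 46.1,  "MD": 49.6}
extracted = {"AB": 52.65, "CD": 19.04, "EF": 22.4, "GH": 12.2,
             "IJ": 24.18, "KL": 46.07, "MD": 48.28}

errors = {k: manual[k] - extracted[k] for k in manual}
mae = sum(abs(e) for e in errors.values()) / len(errors)
print(errors)             # signed errors per segment
print(round(mae, 3))      # 0.637
```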

The system also was tested in a porcine model to validate the surgical application for generating real-time 3D models. The synthetic view shown in Fig. 5 was created from the top-down stereoscopic images of the operating field, one of which is shown in Fig. 4, to illustrate visually the accuracy of the 3D model. The 3D model shown in the synthetic view of Fig. 5 is oriented such that it resembles the side view given in Fig. 6. The yellow box bounds the portion of the operating field that was reconstructed using the stereoscopic images taken from the top-down view. The accuracy of the reconstruction can be qualitatively evaluated by comparing the synthetic view with the actual image.
Fig. 4

Original top-down view of the porcine intestines

Fig. 5

A synthetic view of Fig. 4 created and rotated using the proposed 3D modeling system

Fig. 6

View from the right side showing that the synthetic view (Fig. 5) is an accurate approximation of an actual view taken from the same angle of rotation

Discussion

A new system for generating real-time 3D models of the operating field was tested. This system uses the digital video feedback provided by a custom miniaturized stereoscopic video camera. To achieve real-time performance while generating highly accurate 3D models, an iterative stereo matching algorithm was implemented on a massively parallel GPU platform.

The accuracy of the 3D models was quantitatively evaluated by comparing a set of manual measurements with measurements extracted from the 3D model. These results show that the 3D reconstruction is accurate within 1.5 mm of the manually measured values, with a mean absolute error of 0.637 mm.

Further experiments demonstrated that the proposed system is capable of producing accurate 3D models and synthetic views of the operating field. Synthetic views not only illustrate the visual quality of the 3D model but also provide surgeons with the ability to change their visual perspective while operating.
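A synthetic view of this kind can be produced by rotating the colored point cloud to a virtual camera pose and projecting it back onto an image plane. The sketch below shows one simple way to do this with a z-buffer; the intrinsics and pose are placeholders, and a practical renderer would also fill holes left by occlusion and sampling.

```python
# Render a synthetic view of a colored point cloud from a new camera pose.
import numpy as np

def render_synthetic_view(points_3d, colors, K, R, t, out_shape):
    """points_3d: (N, 3) model points; colors: (N, 3) uint8; K: 3x3 intrinsics;
    R, t: rotation and translation of the new (virtual) camera."""
    h, w = out_shape
    image = np.zeros((h, w, 3), dtype=np.uint8)
    depth = np.full((h, w), np.inf)

    cam = (R @ points_3d.T).T + t            # points in the new camera frame
    in_front = cam[:, 2] > 0
    cam, cols = cam[in_front], colors[in_front]

    proj = (K @ cam.T).T                      # pinhole projection
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)

    for ui, vi, z, c in zip(u, v, cam[:, 2], cols):
        if 0 <= vi < h and 0 <= ui < w and z < depth[vi, ui]:
            depth[vi, ui] = z                 # simple z-buffer: keep the nearest point
            image[vi, ui] = c
    return image
```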

Because this technology yields a geometrically accurate model of the operating field, it has multiple future applications, including enhanced tissue differentiation for real-time evaluation, semi-automated robotic surgery, and surgical simulation for education. Our system comprises a stereoscopic camera and a state-of-the-art stereo matching algorithm, and it can also be connected to a 3D viewing monitor. The setup is mobile and can be transported easily into any operating room.

Our current experiment examined an open abdomen and produced an accurate, real-time 3D model. With a stereoscopic laparoscope (not used in this experiment), however, the same model could be obtained during routine laparoscopic surgery. The practical implications are inviting: this technology can give the operating surgeon enhanced visualization of a potentially hostile operative field. Additionally, information regarding altered tissue planes could support an earlier decision to convert to an open procedure, thereby avoiding iatrogenic injury. The images obtained can be maneuvered in real time so that the 3D spatial arrangement of the primary structure and its surrounding structures can be seen. An enhanced understanding of the target structure’s relationship to adjacent tissue can help reduce iatrogenic injuries and identify important tissue planes.

The described technology gives the user the option to identify structures, label objects using text, and assign them a certain color on the screen. The novice surgeon can spend time in this augmented reality while working on varying simulation scenarios, or more practically, a certain tissue can be identified as “do not touch,” with the image and orientation continuously updated as more of the procedure is accomplished.

In the future, real-time 3D models can be fully integrated with a robotic platform. This technology will make it possible to automate robotic movements, in which the operator locates a point in a scene and instructs the robot to move to that point. The real-time 3D models also could be compared against a library of precomputed anatomic models for more sophisticated tissue identification and classification.
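In such a point-and-go scheme, the reconstruction already contains everything needed to turn a selected pixel into a robot target. A minimal sketch, assuming a per-pixel 3D map in camera coordinates and a hand-eye calibration transform, might look like this:

```python
# Convert a pixel picked by the operator into a 3D target for the robot.
import numpy as np

def pixel_to_robot_target(points_3d, u, v, T_robot_from_camera):
    """points_3d: (H, W, 3) per-pixel positions in camera coordinates;
    T_robot_from_camera: 4x4 rigid transform from a hand-eye calibration."""
    p_cam = np.append(points_3d[v, u], 1.0)   # homogeneous camera-frame point
    p_robot = T_robot_from_camera @ p_cam      # express the point in robot coordinates
    return p_robot[:3]
```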

Notes

Disclosures

Jędrzej Kowalczuk, Avishai Meyer, Jay Carlson, Eric T. Psota, Shelby Buettner, Lance C. Pérez, Shane M. Farritor, and Dmitry Oleynikov have no conflicts of interest or financial ties to disclose.

References

  1. Devernay F (2001) 3D reconstruction of the operating field for image overlay in 3D-endoscopic surgery. In: Proceedings of the IEEE and ACM international symposium on augmented reality (ISAR’01), Washington, DC, pp 191–193
  2. Su LM, Vagvolgyi BP, Agarwal R, Reiley CE, Taylor RH, Hager GD (2009) Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration. Urology 73:896–900
  3. Bethea B, Okamura A, Kitagawa M, Fitton T, Cattaneo S, Gott V, Baumgartner D, Yuh WA (2004) Application of haptic feedback to robotic surgery. J Laparoendosc Adv Surg Tech 14:191–195
  4. Stoyanov D, Darzi A, Yang GZ (2005) A practical approach towards accurate dense 3D depth recovery for robotic laparoscopic surgery. Comput Aided Surg 10:199–208
  5. Cano González AM, Sánchez-González P, Sánchez-Margallo FM, Oropesa I, Pozo F, Gómez EJ (2009) Video-endoscopic image analysis for 3D reconstruction of the surgical scene. In: 4th European conference of the international federation for medical and biological engineering, vol 22, Antwerp, Belgium, pp 923–926
  6. Stoyanov D, Scarzanella M, Pratt P, Yang GZ (2010) Real-time stereo reconstruction in robotically assisted minimally invasive surgery. Med Image Comput Comput Assist Interv 13:275–282
  7. Taylor R, Stoianovici D (2003) Medical robotics in computer-integrated surgery. IEEE Trans Robot Autom 19:765–781
  8. Hu M, Penney G, Rueckert D, Edwards P, Bello F, Casula R, Figl M, Hawkes D (2009) Nonrigid reconstruction of the beating heart surface for minimally invasive cardiac surgery. In: Proceedings of medical image computing and computer-assisted intervention, London, pp 34–42
  9. Figl M, Rueckert D, Hawkes D, Casula R, Hu M, Pedro O, Zhang DP, Penney D, Bello F, Edwards P (2010) Image guidance for robotic minimally invasive coronary artery bypass. Comput Med Imaging Graphics 34:61–68
  10. Stoyanov D, Darzi A, Yang G (2004) Dense 3D depth recovery for soft tissue deformation during robotically assisted laparoscopic surgery. In: Proceedings of medical image computing and computer-assisted intervention, Springer, Heidelberg, pp 41–48
  11. Lo B, Scarzanella M, Stoyanov D, Yang GZ (2008) Belief propagation for depth cue fusion in minimally invasive surgery. MICCAI 2:104–112
  12. Psota ET, Kowalczuk J, Carlson J, Pérez LC (2011) A local iterative refinement method for adaptive support-weight stereo matching. In: International conference on image processing, computer vision and pattern recognition (IPCV), Las Vegas, pp 271–277

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Jędrzej Kowalczuk (1)
  • Avishai Meyer (2)
  • Jay Carlson (1)
  • Eric T. Psota (1)
  • Shelby Buettner (2)
  • Lance C. Pérez (1)
  • Shane M. Farritor (1)
  • Dmitry Oleynikov (2)

  1. University of Nebraska-Lincoln, Lincoln, USA
  2. University of Nebraska Medical Center, Omaha, USA
