Abstract
Minimally invasive surgery performed under endoscopic video is a viable alternative to several types of open abdominal surgery. Advanced visualization techniques require accurate patient registration, often facilitated by reconstruction of the organ surface in situ. We present an active system for intraoperative surface reconstruction of internal organs, comprising a single-plane laser as the structured light source and a surgical endoscope camera as the imaging system. Both surgical instruments are spatially calibrated and tracked, after which the surface reconstruction is formulated as the intersection problem between line-of-sight rays (from the surgical camera) and the laser beam. We report the surface target registration error after a rigid-body registration between the scanned 3D points and the ground truth obtained via CT. When tested on an ex vivo porcine liver and kidney, a root-mean-squared surface target registration error of 1.28 mm was achieved. Accurate endoscopic surface reconstruction is possible using two separately calibrated and tracked surgical instruments, where the trigonometry between the structured light, imaging system, and organ surface can be optimized. Our novelty lies in the accurate calibration technique for the tracked laser beam, and in the design and construction of a laser apparatus suitable for robot-assisted surgery.
1 Introduction
Minimally invasive surgery (MIS) is a viable surgical approach for many abdominal interventions, including liver resection and partial nephrectomy [8]. In these interventions, the multi-port approach is the current standard of care, where multiple incisions are created to allow access of surgical instruments into the abdominal cavity. An endoscopic camera is used as a surrogate for direct human vision. Advanced visualization, such as overlaying subsurface anatomical details onto endoscopic video, is only possible if both the surgical camera and the patient anatomy are spatially tracked in a common coordinate system, and accurate camera calibration [7] and patient registration [2] can be achieved in vivo.
As an intraoperative imaging modality, the surgical camera can be used as a localizer to facilitate patient registration. Three dimensional (3D) surface reconstruction techniques in the current literature [4, 5] can be categorized as either passive or active [6]. Passive methods employ only the acquired images to detect anatomical features to reconstruct a dense surface of the surgical scene. However, such approaches are computationally intensive and suffer from feature-less surfaces with specular highlights. Active methods project structured light into the abdominal cavity, replacing natural features with light patterns. These patterns serve as the basis for surface reconstruction using trigonometry.
We present the ‘EndoScan’, an active system for endoscopic 3D surface reconstruction of abdominal organs. The system comprises two optically tracked surgical instruments: a surgical endoscope camera and a plane laser source, each with a 13 mm to 15 mm outer-diameter form factor. We envision this system being integrated into existing endoscopic surgical navigation systems, with the surgical camera and plane laser source entering the abdomen via separate ports. Once the target organ is scanned, the reconstructed surface can be used for rigid-body registration [1] with preoperative patient data, or to initialize a subsequent deformable registration [2]. The proposed system was validated by registering 3D surface scans obtained from the EndoScan to CT, and the surface Target Registration Error [3] (surface TRE) is reported.
2 Methods and Materials
The proposed 3D scanner employs a laser projection system as a means of introducing structured light into the abdominal cavity, and an imaging system for the acquisition of the projected laser pattern. Both subsystems are spatially tracked by an optical tracking system (Spectra, Northern Digital Inc., Canada) and are designed to be compatible with the da Vinci robotic system (Intuitive Surgical Inc., USA).
A stereo laparoscope (Surgical laparoscope, Olympus) was used as the imaging subsystem (Fig. 2a). An optical dynamic reference frame (DRF), denoted by (\(\mathbf {C}\)), was rigidly attached to the handle of the laparoscope (Fig. 2a). The hand-eye calibration between the optical axis of the right channel camera (\(\mathbf {O}\)) and its DRF (\(\mathbf {C}\)) was performed [7] and denoted as \(\mathbf {^CT_O}\) (Fig. 1b). Video was captured as \(800\times 600\) pixel images, and image distortions were removed prior to any subsequent image processing.
A red laser diode (5 mW, 650 nm) with a diffractive lens (plane divergence: \(120^\circ \)) was integrated into the tip of a medical grade stainless steel tube (outer diameter: 15 mm, length: 38 cm) as part of the laser projection subsystem (Fig. 2b). The laser diode is controlled by a commercial microcontroller (Atmel, USA), capable of outputting 40 mA at 5 V. All electronic components were housed at the distal end of the stainless steel tube, to which a DRF (\(\mathbf {L}\)) is rigidly attached (Fig. 2b). Serial communication and power to the laser instrument are provided via a standard USB connection from the host computer.
2.1 Laser Beam Calibration
The optical tracker records the location and orientation of the DRF directly; therefore, the relationship between the laser beam and the laser DRF (\(\mathbf {L}\)) must be calibrated. The laser beam can be represented by a point on the beam (\(\mathbf {o}\)) and its plane normal (\(\mathbf {n}\)). The laser beam calibration determines the pair (\(\mathbf {o}\), \(\mathbf {n}\)) in the DRF coordinate system (\(\mathbf {L}\)) (Fig. 1).
A calibration phantom, a raised metal block with a thin engraved line and an attached DRF \(\mathbf {P}\), was developed (Fig. 3a). The end points of the engraved line, (\(\mathbf {p_{1}}\), \(\mathbf {p_{2}}\)), are known in \(\mathbf {P}\) by manufacture. To calibrate the orientation of the plane laser beam, it must be aligned with the engraved line (Fig. 3b). Once aligned, the paired points (\(\mathbf {p_{1}}\), \(\mathbf {p_{2}}\)) must lie on the plane of the laser beam (Fig. 1a):

\(\mathbf {p'_{i}} = \left( \mathbf {^{w}T_{L}} \right)^{-1} \, \mathbf {^{w}T_{P}} \, \mathbf {p_{i}}, \qquad i \in \{1, 2\} \qquad (1)\)

where the points (\(\mathbf {p'_{1}}\), \(\mathbf {p'_{2}}\)) are the end points of the engraved line specified in \(\mathbf {L}\) (Fig. 1a), while \(\mathbf {^{w}T_{P}}\) and \(\mathbf {^{w}T_{L}}\) are the rigid-body tracking poses of the DRFs attached to the line phantom and the laser instrument, respectively. After n acquisitions, a set of 2n points is measured via Eq. (1), and the laser beam geometry (\(\mathbf {o}\), \(\mathbf {n}\)) can be computed using any plane-fitting algorithm.
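The plane-fitting step can be realized with a standard SVD-based least-squares fit over the measured end points. The following is a minimal sketch of one such fit; the function name and NumPy implementation are ours, not part of the original system:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (m, 3) array of 3D points.

    Returns (o, n): the centroid o (a point on the plane) and the
    unit plane normal n, i.e. the laser-beam geometry.
    """
    o = points.mean(axis=0)
    # The plane normal is the right singular vector of the centered
    # points associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - o)
    n = vt[-1]
    return o, n / np.linalg.norm(n)
```

Here the rows of `points` would be the 2n measured end points expressed in \(\mathbf {L}\); the residual point-to-plane distances give the calibration RMS reported in the Results.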
2.2 Surface Reconstruction as Line-to-Plane Intersection
Using the ideal pinhole camera model, for a given camera intrinsic matrix A, a point \(Q=(X,Y,Z)^T\) in the camera coordinate system \(\mathbf {O}\) can be projected onto the image:

\(q = A \, Q \qquad (2)\)

where \(q=(u,v,w)^T\) is the projected pixel in homogeneous coordinates. Given a pixel location, the corresponding ray (emanating from the camera center) can be computed by:

\(Q = A^{-1} \, q \qquad (3)\)

where a pixel is represented in the canonical camera coordinate system as \(q=(u,v,1)^T\). For each pixel in the image coinciding with the laser projection, a line-of-sight ray can be projected by Eq. (3) and expressed in the tracker coordinate system:

\(C' = \mathbf {^{W}T_{C}} \, \mathbf {^{C}T_{O}} \, C, \qquad Q' = \dfrac{\mathbf {^{W}T_{C}} \, \mathbf {^{C}T_{O}} \, Q - C'}{\left\Vert \mathbf {^{W}T_{C}} \, \mathbf {^{C}T_{O}} \, Q - C' \right\Vert } \qquad (4)\)

where \(Q'\) is the normalized line-of-sight ray specified in the world coordinate system, \(C=[0,0,0]^T\) is the camera origin, and \(C'\) is the camera center specified in the world coordinate system of the tracker (\(\mathbf {W}\)). Simultaneously, the pose of the laser beam is known via tracking:

\(\mathbf {n}' = \mathbf {^{W}R_{L}} \, \mathbf {n}, \qquad \mathbf {o}' = \mathbf {^{W}T_{L}} \, \mathbf {o} \qquad (5)\)

where the pair \((\mathbf {n}', \mathbf {o}')\) specifies the pose of the laser beam in \(\mathbf {W}\), and \(\mathbf {^{W}R_{L}}\) is the rotational component of the tracked pose \(\mathbf {^{W}T_{L}}\). Assuming \(Q'\) and \(\mathbf {n}'\) are not perpendicular, the intersection between the line-of-sight ray and the laser beam can be computed:

\(^{W}q = C' + \dfrac{(\mathbf {o}' - C') \cdot \mathbf {n}'}{Q' \cdot \mathbf {n}'} \, Q' \qquad (6)\)

where \(^{W}q\) is a point on the organ surface intersected by the laser beam, specified in \(\mathbf {W}\). The intrinsic matrix A is determined as part of the hand-eye calibration [7].
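The back-projection and ray-plane intersection described above can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation; the function names are ours, and all inputs to the intersection are assumed already expressed in the tracker coordinate system \(\mathbf {W}\):

```python
import numpy as np

def pixel_to_ray(u, v, A):
    """Back-project an undistorted pixel (u, v) to a unit line-of-sight
    direction in the camera coordinate system."""
    q = np.array([u, v, 1.0])   # canonical pixel coordinates
    d = np.linalg.solve(A, q)   # d = A^{-1} q
    return d / np.linalg.norm(d)

def intersect_ray_plane(c, d, o, n):
    """Intersect the ray c + t*d with the plane through o with normal n.

    Returns the intersection point, or None if the ray is (nearly)
    parallel to the laser plane."""
    denom = np.dot(d, n)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(o - c, n) / denom
    return c + t * d
```

In use, `c` is the camera center, `d` the normalized line-of-sight ray, and `(o, n)` the tracked laser-plane pose; one intersection is computed per laser-projection pixel.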
2.3 Validation
To assess the proposed surface scanning system, a validation setup similar to that described in [5] was constructed. An ex vivo phantom, comprising a porcine kidney and a lobe of porcine liver rigidly secured in a torso box, was built (Fig. 4c). A CT scan of the torso phantom was acquired using an O-Arm (Medtronic, Ireland) (Fig. 4a), serving as the ground truth for subsequent analysis.
Two entry ports were made in the torso phantom: one close to the umbilicus for the endoscope camera, and the other in the lower abdominal region for the laser system (Fig. 4b). These locations were chosen to mimic a typical MIS abdominal surgical approach. During the scanning procedure, the endoscope camera was held rigidly by a stabilizer, while the laser apparatus was swept free-hand by an operator. The distances from the organ surface to the endoscope camera and to the laser instrument were roughly 10 cm and 15 to 20 cm, respectively (Fig. 4c), while the angle between the instruments was roughly \(40^\circ \) (Fig. 4d). Total scan time was approximately 3 min.
3 Results
The spatial calibration between the laser beam and its DRF was achieved using 18 measurements (Fig. 3b), acquired in under 5 min. The RMS distance between the 36 acquired points and the fitted plane was 0.83 mm.
Two scans of the organ surface were acquired. First, the camera was rigidly mounted with the viewing axis of the camera centered over the liver (Fig. 4b). The laser beam projection image was segmented and reconstructed in 3D using methods described in Sect. 2.2. Once reconstructed in \(\mathbf {W}\), the 3D surface was registered to the CT organ scan via rigid-body ICP [1].
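The rigid-body ICP registration [1] used here can be sketched as a minimal point-to-point variant: brute-force nearest-neighbour correspondences followed by a closed-form (Kabsch) pose update, repeated. The function names and implementation details are ours, not the authors' code:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch solution: least-squares rotation R and translation t
    mapping the rows of src onto the corresponding rows of dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour correspondences,
    then a rigid pose update, repeated for a fixed iteration count."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; a k-d tree scales better.
        nn = dst[np.linalg.norm(cur[:, None] - dst[None], axis=2).argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return cur
```

As with any ICP variant, convergence to the correct pose assumes a reasonable initial alignment between the scan and the CT surface.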
Accuracy of the liver surface reconstruction is visualized in Fig. 5a. The scanned area corresponds approximately to the fixed viewing area of the camera. A total of 127 images (scanlines) were acquired, resulting in 33,571 points in the laser scan of the liver. After the rigid-body ICP registration, the Euclidean distance between each vertex on the scanlines and the CT surface was computed and is summarized in Table 1.
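The per-vertex error computation can be approximated by the distance from each registered scan vertex to its nearest CT surface vertex. The following is a minimal sketch of such a point-to-vertex approximation of the point-to-surface distance; the function name is ours:

```python
import numpy as np

def surface_tre(scan_pts, surface_pts):
    """Distance from each registered scan vertex (n, 3) to the nearest
    CT surface vertex (m, 3); a point-to-vertex approximation of
    surface TRE. Returns the per-vertex distances and their RMS."""
    # Full pairwise distance matrix; fine for a sketch, but use a
    # k-d tree for large point sets.
    d = np.linalg.norm(scan_pts[:, None, :] - surface_pts[None, :, :], axis=2)
    per_vertex = d.min(axis=1)
    rms = float(np.sqrt(np.mean(per_vertex ** 2)))
    return per_vertex, rms
```

A denser CT surface mesh makes the point-to-vertex approximation closer to the true point-to-surface distance.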
A subsequent scan was performed in which both the kidney and the liver were positioned in the camera view: first the kidney was centered in the camera viewing area, followed by the junction between the kidney and the liver. In our torso phantom, this junction exhibits high surface curvature. A total of 97 images were acquired, resulting in 43,910 vertices on the laser scanlines. The Euclidean distance error between the laser scanlines and the CT surface after rigid-body ICP registration is summarized in Table 1. As in the previous experiment, vertices with surface normals perpendicular to the camera viewing axis, and those at the edge of the camera viewing area, tended to exhibit higher-than-average registration error (Fig. 5). By repositioning the camera and optimizing the laser beam orientation, a surface reconstruction with sub-millimeter accuracy was achieved (Table 1).
The results of the ICP registration between the laser scanlines and the CT surface are shown in Table 1. More than 90% of the vertices exhibit a surface TRE of less than 2 mm, resulting in a sub-millimeter mean surface TRE. The surface reconstruction generated by our proposed system thus provides an accurate means of registering multi-modal patient-specific data, which can be used to enhance the visualization of the surgical scene (Fig. 6).
4 Discussion and Conclusion
A 3D surface scanning system for multi-port MIS abdominal surgeries is presented. Based on a novel calibration framework, the dynamic spatial relationship between the camera and the laser instrument is known via preoperative calibration and intraoperative tracking. This allows us to formulate surface scanning as the intersection problem between line-of-sight rays and the laser beam, and to optimize the trigonometry between the organ surface, the structured light, and the imaging system. In ex vivo experiments, our system achieves sub-millimeter surface reconstruction accuracy that is competitive with other systems [4, 5].
A commercial clinical endoscope was integrated to minimize impact on the surgical workflow. Both camera and laser calibrations require minimal image acquisition (typically 10 to 12 images) and user interaction [7]. Our proposed system can be readily applied to other surgical scenarios, such as neurosurgery and orthopaedic surgery, and is designed to be compatible with the da Vinci surgical robotic system.
In contrast to single-shot methods [5], our system requires multiple images to reconstruct a surface, and cannot account for motion of the organ during the scan (such as that due to breathing or cardiac motion). Using a single-plane laser allows us to reconstruct a 3D scanline in real time: since the segmentation of a single laser projection is trivial, the computational requirement of our system is extremely low. Accurate segmentation of the laser projection from the endoscopic image is crucial for surface reconstruction. Laser planes perpendicular to the organ surface result in thin, accurate segmentations; conversely, if the incident angle is oblique, the laser projection is diffuse in appearance and the true incident pixels are ambiguous to locate. Lower surface TRE was achieved in the second validation experiment, where both organs were scanned, possibly because the laser beam was carefully adjusted to produce thin projections.
In our approach, we explored the efficacy of using two separately tracked devices in the context of multi-port MIS, where the binocular disparity is known via intraoperative tracking and preoperative calibration. Since the mathematical principle governing the system is trigonometry, system accuracy depends strongly on the angle between the surgical camera and the laser plane source. If the two subsystems were integrated into a single device, the minimal angle between the light projection and the optical axis would limit the surface reconstruction accuracy [4]. This study demonstrated the proposed system's ability to accurately reconstruct organ surfaces.
References
Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)
Hill, D.L.G., Batchelor, P.G., Holden, M., Hawkes, D.J.: Medical image registration. Phys. Med. Biol. 46(3), R1–R45 (2001)
Ma, B., Ellis, R.E.: Analytic expressions for fiducial and surface target registration error. In: Larsen, R., Nielsen, M., Sporring, J. (eds.) MICCAI 2006. LNCS, vol. 4191, pp. 637–644. Springer, Heidelberg (2006). https://doi.org/10.1007/11866763_78
Maier-Hein, L., et al.: Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med. Image Anal. 17(8), 974–996 (2013)
Maier-Hein, L., et al.: Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction. IEEE Trans. Med. Imaging 33(10), 1913–1930 (2014)
Mirota, D.J., Ishii, M., Hager, G.D.: Vision-based navigation in image-guided interventions. Ann. Rev. Biomed. Eng. 13(1), 297–319 (2011)
Morgan, I., Jayarathne, U., Rankin, A., Peters, T.M., Chen, E.C.S.: Hand-eye calibration for surgical cameras: a procrustean perspective-n-point solution. Int. J. Comput. Assisted Radiol. Surg. 12(7), 1141–1149 (2017)
Vibert, E., Perniceni, T., Levard, H., Denet, C., Shahri, N.K., Gayet, B.: Laparoscopic liver resection. Br. J. Surg. 93(1), 67–72 (2006)
© 2018 Springer Nature Switzerland AG
Geurten, J., Xia, W., Jayarathne, U., Peters, T.M., Chen, E.C.S. (2018). Endoscopic Laser Surface Scanner for Minimally Invasive Abdominal Surgeries. In: Frangi, A., Schnabel, J., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science(), vol 11073. Springer, Cham. https://doi.org/10.1007/978-3-030-00937-3_17
DOI: https://doi.org/10.1007/978-3-030-00937-3_17
Print ISBN: 978-3-030-00936-6
Online ISBN: 978-3-030-00937-3