Abstract
We have developed a liver surgery support system that uses two depth cameras to measure the positional relationship between a surgical knife and a liver in real time. This report describes an overview of the system, the method for depth camera calibration, the estimation of the knife tip position, and some experimental results.
Keywords
- Depth Camera
- Micron Tracker
- General-purpose Computing On Graphics Processing Units (GPGPU)
- Surgical Support System
- Sound Frequency Increases
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
1 Introduction
We have developed a new support system for liver surgery based on information technology. In our system, the positional relationship between a surgical knife and a liver is measured in real time. The goals are to warn surgeons with flashing red lights or alarms when the knife comes too close to high-risk areas, and to display optimal guides for knife motion so that cancerous cells are completely removed while a maximal healthy portion of the liver is retained.
2 System Overview
Figure 1 shows an overview of our surgical support system. Before the operation, a 3D model of the patient’s liver is generated from computed tomography images. During the operation, the positions of the knife and the patient’s liver are measured by two depth cameras with different features that are mounted over the operating table. The position, orientation, deformation, and incision of the liver are calculated in real time by GPGPU (general-purpose computing on graphics processing units), by matching the liver shape measured by the depth sensor to the 3D model. Details of this process are not covered in this report; for further information, refer to [1, 2].
Our system uses two depth cameras with different features. The first is a MicronTracker3 (model H3-60), a marker-tracking camera system with high precision that is used to measure the knife position [3]. The detailed specifications of this sensor are listed in Table 1. The second is a KINECT for Windows v2 sensor, which has moderate precision in depth measurement and a wide measuring range, and is used to measure the shape of the liver. These cameras have to be placed at a distance from each other, and their optical axes cannot coincide. To transform between their coordinate systems, a calibration matrix must be created.
3 Depth Camera Calibration
To calibrate the position and orientation of these two depth cameras, we used different markers and measured their positions with each sensor. \( \varvec{p}_{i}^{\text{MT}} = \left( x_{i}^{\text{MT}} \;\; y_{i}^{\text{MT}} \;\; z_{i}^{\text{MT}} \right)^{T} \) and \( \varvec{p}_{i}^{\text{kinect}} = \left( x_{i}^{\text{kinect}} \;\; y_{i}^{\text{kinect}} \;\; z_{i}^{\text{kinect}} \right)^{T} \) are the 3D coordinates of each marker measured by MicronTracker3 and the KINECT sensor, respectively, and \( i = 1, \ldots ,N \) is the marker identification number. Figure 2 shows the markers used for our calibration. Eight markers (\( N = 8 \)) are used in this experiment. They are printed on adhesive printer sheets and attached to acrylic boxes. By solving the following Eq. (1) for all marker pairs in a least-squares sense, the calibration matrix \( {\mathbf{M}} \) is calculated:

\( \varvec{p}_{i}^{\text{kinect}} = {\mathbf{M}}\,\varvec{p}_{i}^{\text{MT}} ,\quad i = 1, \ldots ,N \)  (1)

where the points are expressed in homogeneous coordinates.
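The paper does not detail how Eq. (1) is solved. A common least-squares approach for estimating a rigid transform from point correspondences is the Kabsch/Umeyama fit; the sketch below assumes the markers are given as (N, 3) NumPy arrays and uses a function name of our own choosing:

```python
import numpy as np

def estimate_calibration(p_mt, p_kinect):
    """Estimate a rigid 4x4 calibration matrix M such that
    p_kinect ~= M @ [p_mt; 1], via a Kabsch-style least-squares fit.
    p_mt, p_kinect: (N, 3) arrays of corresponding marker positions."""
    src = np.asarray(p_mt, dtype=float)
    dst = np.asarray(p_kinect, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M
```

With the eight marker pairs measured by the two sensors, `M = estimate_calibration(p_mt, p_kinect)` yields the homogeneous matrix used in the later sections.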
To acquire a proper and precise matrix, the markers should not all be placed in the same plane, and \( N \) should be large. The marker sizes used in this experiment are H30 × W50 mm and H40 × W50 mm.
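The non-coplanarity condition can be checked numerically before solving Eq. (1). One way, sketched here with a helper of our own (not from the paper), is to look at the smallest singular value of the centered marker positions:

```python
import numpy as np

def markers_are_coplanar(points, tol=1e-6):
    """Return True if all 3D marker positions lie (numerically) in one
    plane. points: (N, 3) array. The smallest singular value of the
    centered point cloud measures the extent out of the best-fit plane."""
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] < tol * max(s[0], 1.0)
```

If this returns True for a candidate marker layout, at least one marker should be moved out of the common plane before calibration.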
4 Estimation of Knife Tip Position
It is difficult to measure the knife tip position directly and without contact during the operation, because the tip gets covered in blood and is hidden in the incised portion of the skin or organ. Therefore, we use markers attached to the top of the knife. Figure 3 shows a real electrosurgical knife (a) and the model knife with markers that we are developing. To track the knife accurately and robustly, many markers are placed facing in each direction.
Before the operation, the relative vector between each marker and the tip has to be measured. To acquire the relative vector, the tip of the knife is set to the origin point \( \varvec{p}_{\text{table}}^{\text{MT}} \) of the fixed marker \( {\text{C}}_{\text{table}} \) on the flat table (Fig. 4). \( \varvec{p}_{\text{table}}^{\text{MT}} \) is measured by MicronTracker3. The position \( \varvec{p}_{\text{knife}}^{\text{MT}} \) and orientation \( {\mathbf{R}}_{\text{knife}}^{\text{MT}} \) of the marker \( {\text{C}}_{\text{knife}} \) attached to the knife are also measured by MicronTracker3 in \( \Sigma_{\text{MT}} \). \( \Sigma_{\text{MT}} \) and \( \Sigma_{\text{knife}} \) are the coordinate systems of MicronTracker3 and the knife, respectively. The relative vector \( \varvec{p}_{\text{rel}}^{\text{knife}} \) is calculated by

\( \varvec{p}_{\text{rel}}^{\text{knife}} = \left( {\mathbf{R}}_{\text{knife}}^{\text{MT}} \right)^{-1} \left( \varvec{p}_{\text{table}}^{\text{MT}} - \varvec{p}_{\text{knife}}^{\text{MT}} \right) \)  (2)

in \( \Sigma_{\text{knife}} \). To convert \( \varvec{p}_{\text{rel}}^{\text{knife}} \) to \( \varvec{p}_{\text{rel}}^{\text{MT}} \) in \( \Sigma_{\text{MT}} \),

\( \varvec{p}_{\text{rel}}^{\text{MT}} = {\mathbf{R}}_{\text{knife}}^{'\text{MT}} \varvec{p}_{\text{rel}}^{\text{knife}} \)  (3)

Therefore, the knife tip position \( \varvec{p}_{\text{tip}}^{\text{MT}} \) is calculated by

\( \varvec{p}_{\text{tip}}^{\text{MT}} = \varvec{p}_{\text{knife}}^{'\text{MT}} + \varvec{p}_{\text{rel}}^{\text{MT}} \)  (4)

where \( \varvec{p}_{\text{knife}}^{'\text{MT}} \) and \( {\mathbf{R}}_{\text{knife}}^{'\text{MT}} \) are the position and orientation of \( {\text{C}}_{\text{knife}} \) during the operation.
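The two steps above can be sketched directly, assuming marker orientations arrive as 3×3 rotation matrices and positions as 3-vectors (function names are ours):

```python
import numpy as np

def relative_vector(p_table_mt, p_knife_mt, R_knife_mt):
    """Tip offset expressed in the knife-marker frame (Sigma_knife):
    p_rel^knife = R^{-1} (p_table^MT - p_knife^MT).
    For a rotation matrix the inverse is the transpose."""
    return R_knife_mt.T @ (np.asarray(p_table_mt) - np.asarray(p_knife_mt))

def tip_position(p_knife_mt_now, R_knife_mt_now, p_rel_knife):
    """Current tip position in Sigma_MT during the operation:
    p_tip^MT = p'_knife^MT + R'_knife^MT p_rel^knife."""
    return np.asarray(p_knife_mt_now) + R_knife_mt_now @ p_rel_knife
```

`relative_vector` is evaluated once during the pre-operative setup with the tip on the table marker; `tip_position` is then evaluated every frame with the current marker pose.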
A point \( \varvec{p}^{\text{MT}} \) measured in \( \Sigma_{\text{MT}} \) can be converted to a point \( \varvec{p}^{\text{kinect}} \) in the coordinate system of the KINECT, \( \Sigma_{\text{kinect}} \), by using the calibration matrix \( {\mathbf{M}} \) derived in the previous section.
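Applying the 4×4 homogeneous matrix to a 3D point amounts to one append and one matrix product; a minimal sketch (helper name is ours):

```python
import numpy as np

def mt_to_kinect(p_mt, M):
    """Convert a point from Sigma_MT to Sigma_kinect using the 4x4
    homogeneous calibration matrix M."""
    ph = np.append(np.asarray(p_mt, dtype=float), 1.0)  # homogeneous coords
    return (M @ ph)[:3]
```

This is how the knife tip, tracked by MicronTracker3, can be compared against the liver surface measured in the KINECT frame.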
5 Experiment and Results
By combining the above calculations, we implemented a pilot system and conducted preliminary experiments (Figs. 5 and 6). Before the experiments, we used a 3D printer to make a life-size model of a liver in a red-colored material. The original data used to create the liver model was taken from a computed tomography (CT) image of a patient. The liver model was placed on a table, and the two depth sensors were mounted above the table and directed downward to capture it.
This system can give visual and audio warnings. As the tip of the knife approaches the liver, the color of the knife tip on the monitor changes from green to red (Fig. 7) and the sound frequency increases. These experimental results show the feasibility of our system.
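The paper does not specify how distance is mapped to the color and sound cues; a hypothetical linear mapping (all thresholds and frequencies below are illustrative, not from the paper) could look like this:

```python
def warning_signal(distance_mm, near=5.0, far=50.0,
                   f_min=440.0, f_max=1760.0):
    """Hypothetical mapping from knife-liver distance to a warning.
    Returns (rgb_color, beep_frequency_hz): color blends green -> red
    and frequency rises as the tip approaches the liver."""
    # Normalize: 0.0 at/beyond `far` (safe), 1.0 at/inside `near` (danger).
    t = min(max((far - distance_mm) / (far - near), 0.0), 1.0)
    color = (int(255 * t), int(255 * (1.0 - t)), 0)  # (R, G, B)
    freq = f_min + (f_max - f_min) * t
    return color, freq
```

Each frame, the distance from the converted tip position to the nearest point of the liver model would drive this mapping.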
References
1. Noborio, H., Onishi, K., Koeda, M., Mizushino, K., Yagi, M., Kaibori, M., Kwon, M.: Motion transcription algorithm by matching corresponding depth image and Z-buffer. In: Proceedings of the 10th Anniversary Asian Conference on Computer Aided Surgery (ACCAS 2014), pp. S5–3, Fukuoka, Japan, June 2014
2. Noborio, H., Onishi, K., Koeda, M., Mizushino, K., Kunii, T., Kaibori, M., Kon, M., Chen, Y.: A fast surgical algorithm operating polyhedrons using Z-Buffer in GPU. In: Proceedings of the 9th Asian Conference on Computer Aided Surgery (ACCAS 2013), pp. 110–111, Tokyo, Japan, September 2013
3. Claron Technology Inc.: MicronTracker. http://www.clarontech.com/microntracker.php
© 2015 Springer International Publishing Switzerland
Koeda, M. et al. (2015). Depth Camera Calibration and Knife Tip Position Estimation for Liver Surgery Support System. In: Stephanidis, C. (eds) HCI International 2015 - Posters’ Extended Abstracts. HCI 2015. Communications in Computer and Information Science, vol 528. Springer, Cham. https://doi.org/10.1007/978-3-319-21380-4_84
Print ISBN: 978-3-319-21379-8
Online ISBN: 978-3-319-21380-4