1 Introduction

We have developed a new support system for liver surgery based on information technology. In our system, the positional relationship between the surgical knife and the liver is measured in real time. The system has two goals: to warn the surgeons, for example with flashing red indicators or alarm sounds, when the knife comes too close to high-risk areas, and to display optimal guides for knife motion so that cancerous tissue is completely removed while a maximal healthy portion of the liver is retained.

2 System Overview

Figure 1 shows an overview of our surgical support system. Before the operation, a 3D model of the patient's liver is generated from computed tomography (CT) images. During the operation, the positions of the knife and of the patient's liver are measured by two depth cameras with different characteristics mounted over the operating table. The position, orientation, deformation, and incision of the liver are calculated in real time by GPGPU (general-purpose computing on graphics processing units) processing, which matches the liver shape measured by the depth sensor to the 3D model. Details of this process are not covered in this report; for further information, refer to [1, 2].

Fig. 1.

System overview of our liver surgery support system

Our system uses two depth cameras with different characteristics. The first is a MicronTracker3 (model H3-60), a marker-tracking camera system with high precision that is used to measure the knife position [3]. The detailed specifications of this sensor are listed in Table 1. The second is a KINECT for Windows v2 sensor, which has moderate depth precision and a wide measuring range and is used to measure the shape of the liver. Because the two cameras have to be placed some distance apart, their optical axes cannot coincide; a calibration matrix must therefore be created to transform between their coordinate systems.

Table 1. Specifications of the MicronTracker3 H3-60

3 Depth Camera Calibration

To calibrate the relative position and orientation of the two depth cameras, we used a set of markers and measured their positions with each sensor. \( \varvec{p}_{i}^{\text{MT}} = \left( x_{i}^{\text{MT}} \;\; y_{i}^{\text{MT}} \;\; z_{i}^{\text{MT}} \right)^{T} \) and \( \varvec{p}_{i}^{\text{kinect}} = \left( x_{i}^{\text{kinect}} \;\; y_{i}^{\text{kinect}} \;\; z_{i}^{\text{kinect}} \right)^{T} \) are the 3D coordinates of marker \( i \) measured by the MicronTracker3 and the KINECT sensor, respectively, where \( i = 1, \ldots , N \) is the marker index. Figure 2 shows the markers used for the calibration. Eight markers (\( N = 8 \)) were used in this experiment; they were printed on adhesive printer sheets and attached to acrylic boxes. The calibration matrix \( {\mathbf{M}} \) is obtained by solving Eq. (1):

$$ {\mathbf{M}} = \left( \varvec{p}_{1}^{\text{kinect}} \;\; \cdots \;\; \varvec{p}_{N}^{\text{kinect}} \right)\left( \varvec{p}_{1}^{\text{MT}} \;\; \cdots \;\; \varvec{p}_{N}^{\text{MT}} \right)^{ - 1} $$
(1)

To obtain a proper and precise matrix, the markers should not all be placed in the same plane and \( N \) should be large. The marker sizes used in this experiment were H30 × W50 mm and H40 × W50 mm.
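As an illustration of Eq. (1), the following NumPy sketch estimates \( {\mathbf{M}} \) from the marker correspondences. Because the marker matrices are 3 × N and therefore not square for N > 3, the inverse in Eq. (1) is realised here as a Moore-Penrose pseudo-inverse, i.e. a least-squares fit over all markers; the marker coordinates below are synthetic placeholders, not data from our experiment.

```python
import numpy as np

def estimate_calibration_matrix(p_mt, p_kinect):
    """Estimate the matrix M of Eq. (1) that maps MicronTracker3
    coordinates to KINECT coordinates.

    p_mt, p_kinect : (3, N) arrays whose columns are the marker positions
    measured in each sensor's coordinate system.
    """
    # Pseudo-inverse = least-squares realisation of the inverse in Eq. (1).
    return p_kinect @ np.linalg.pinv(p_mt)

# Synthetic example: eight non-coplanar marker positions (metres).
rng = np.random.default_rng(0)
p_mt = rng.uniform(-0.3, 0.3, size=(3, 8))

# Ground-truth linear map used only to generate fake KINECT measurements.
angle = np.deg2rad(20.0)
M_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
p_kinect = M_true @ p_mt + rng.normal(scale=1e-3, size=(3, 8))  # small sensor noise

M = estimate_calibration_matrix(p_mt, p_kinect)
print(np.round(M - M_true, 3))  # residual should be near zero
```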

Fig. 2.

Markers for depth camera calibration

4 Estimation of Knife Tip Position

It is difficult to measure the knife tip position directly and without contact during the operation because the tip becomes covered in blood and is hidden inside the incised portion of the skin or organ. Therefore, we use markers attached to the upper part of the knife. Figure 3 shows a real electrosurgical knife (a) and the model knife with markers that we are developing. To track the knife accurately and robustly, multiple markers are placed facing in different directions.

Fig. 3.

Knives with markers

Before the operation, the relative vector between each marker and the knife tip has to be measured. To acquire this vector, the tip of the knife is placed at the origin point \( \varvec{p}_{\text{table}}^{\text{MT}} \) of a fixed marker \( {\text{C}}_{\text{table}} \) on a flat table (Fig. 4). The position \( \varvec{p}_{\text{knife}}^{\text{MT}} \) and orientation \( {\mathbf{R}}_{\text{knife}}^{\text{MT}} \) of the marker \( {\text{C}}_{\text{knife}} \) attached to the knife are measured, together with \( \varvec{p}_{\text{table}}^{\text{MT}} \), by the MicronTracker3 in \( \Sigma_{\text{MT}} \). Here \( \Sigma_{\text{MT}} \) and \( \Sigma_{\text{knife}} \) denote the coordinate systems of the MicronTracker3 and of the knife, respectively. The relative vector \( \varvec{p}_{\text{rel}}^{\text{MT}} \), expressed in \( \Sigma_{\text{MT}} \), is first calculated by

Fig. 4.

Measuring relative vectors

$$ \varvec{p}_{\text{rel}}^{\text{MT}} = \varvec{p}_{\text{table}}^{\text{MT}} - \varvec{p}_{\text{knife}}^{\text{MT}} $$
(2)

To convert \( \varvec{p}_{\text{rel}}^{\text{MT}} \) into the constant vector \( \varvec{p}_{\text{rel}}^{\text{knife}} \) expressed in \( \Sigma_{\text{knife}} \), we use

$$ \varvec{p}_{\text{rel}}^{\text{knife}} = \left( {\mathbf{R}}_{\text{knife}}^{\text{MT}} \right)^{ - 1} \cdot \varvec{p}_{\text{rel}}^{\text{MT}} $$
(3)

Therefore, the knife tip position \( \varvec{p}_{\text{tip}}^{\text{MT}} \) is calculated by

$$ \varvec{p}_{\text{tip}}^{\text{MT}} = {\mathbf{R}}_{\text{knife}}^{\prime \, \text{MT}} \cdot \varvec{p}_{\text{rel}}^{\text{knife}} + \varvec{p}_{\text{knife}}^{\prime \, \text{MT}} $$
(4)

where \( \varvec{p}_{\text{knife}}^{\prime \, \text{MT}} \) and \( {\mathbf{R}}_{\text{knife}}^{\prime \, \text{MT}} \) denote the position and orientation of \( {\text{C}}_{\text{knife}} \) measured during the operation.
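The following NumPy sketch illustrates Eqs. (2)–(4); all numerical values are placeholders rather than measured data, and the rotation matrices stand in for the marker orientations reported by the MicronTracker3.

```python
import numpy as np

# --- One-time measurement: knife tip placed on the table marker origin ---
p_table_mt = np.array([0.10, 0.05, 0.80])   # table marker origin in the MT frame (m)
p_knife_mt = np.array([0.12, 0.02, 0.65])   # knife marker position in the MT frame (m)
R_knife_mt = np.eye(3)                      # knife marker orientation (3x3 rotation)

# Eq. (2): relative vector from the knife marker to the tip, in the MT frame.
p_rel_mt = p_table_mt - p_knife_mt

# Eq. (3): express that vector in the knife marker's own frame,
# where it stays constant as the knife moves.
p_rel_knife = np.linalg.inv(R_knife_mt) @ p_rel_mt

# --- During the operation: current pose of the knife marker ---
R_knife_mt_now = np.array([[0.0, -1.0, 0.0],
                           [1.0,  0.0, 0.0],
                           [0.0,  0.0, 1.0]])
p_knife_mt_now = np.array([0.20, 0.10, 0.70])

# Eq. (4): rotate the stored relative vector back into the MT frame and
# add the current marker position to obtain the tip position.
p_tip_mt = R_knife_mt_now @ p_rel_knife + p_knife_mt_now
print(p_tip_mt)
```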

A point \( \varvec{p}^{\text{MT}} \) measured in \( \Sigma_{\text{MT}} \) can be converted to a point \( \varvec{p}^{\text{kinect}} \) in the coordinate system of the KINECT, \( \Sigma_{\text{kinect}} \), by using the calibration matrix \( {\mathbf{M}} \) derived in the previous section:

$$ \varvec{p}^{\text{kinect}} = {\mathbf{M}} \cdot \varvec{p}^{\text{MT}} $$
(5)
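A minimal continuation of the sketches above shows Eq. (5) as a single matrix-vector product; \( {\mathbf{M}} \) and the tip position are again placeholders.

```python
import numpy as np

M = np.eye(3)                            # placeholder calibration matrix from Eq. (1)
p_tip_mt = np.array([0.20, 0.10, 0.70])  # placeholder tip position from Eq. (4), metres

# Eq. (5): express the knife tip in the KINECT coordinate system.
p_tip_kinect = M @ p_tip_mt
print(p_tip_kinect)
```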

5 Experiment and Results

By combining the above calculations, we implemented a pilot system and conducted preliminary experiments (Figs. 5 and 6). Beforehand, we used a 3D printer to make a life-size model of a liver in a red-colored material; the original data used to create the model were taken from a patient's CT images. The model liver was placed on a table, and the two depth sensors were mounted above the table and directed downward to capture it.

Fig. 5.

Experimental setup (Color figure online)

Fig. 6.

Knife model with markers and liver model made using a 3D printer (Color figure online)

This system can give visual and audio warnings. As the tip of the knife approaches the liver, the color of the knife tip shown on the monitor changes from green to red (Fig. 7) and the frequency of the warning sound increases. These preliminary experimental results demonstrate the feasibility of our system.
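A possible form of this distance-based alert is sketched below; the point-cloud distance test, thresholds, color blending, and tone mapping are illustrative assumptions, not the exact implementation used in our system.

```python
import numpy as np

def warning_state(p_tip_kinect, liver_points, near=0.05, far=0.20):
    """Map the tip-to-liver distance to an alert color and tone frequency.

    p_tip_kinect : (3,) knife tip position in the KINECT frame (metres)
    liver_points : (N, 3) liver surface point cloud from the KINECT sensor
    near, far    : illustrative distance thresholds (metres)
    """
    # Minimum distance from the tip to the measured liver surface.
    d = float(np.min(np.linalg.norm(liver_points - p_tip_kinect, axis=1)))
    # 0 when the tip is far from the liver, 1 when it is at or inside `near`.
    closeness = float(np.clip((far - d) / (far - near), 0.0, 1.0))
    color = (closeness, 1.0 - closeness, 0.0)   # RGB: green -> red as the knife approaches
    frequency_hz = 440.0 + 1000.0 * closeness   # warning tone rises with proximity
    return d, color, frequency_hz

# Illustrative usage with a synthetic liver point cloud.
rng = np.random.default_rng(1)
liver_points = rng.uniform(-0.1, 0.1, size=(5000, 3))
print(warning_state(np.array([0.0, 0.0, 0.12]), liver_points))
```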

Fig. 7.

Visual alert based on the distance between the liver surface and the knife tip. The tip color of the model knife changes from green to red depending on the distance (Color figure online).