CIGuide: in situ augmented reality laser guidance

Purpose: A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base with minimal mental and workload overhead for surgeons working without a monitor and without dedicated pointing tools.

Methods: Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme, and a modified robotic end-effector. Rhinospider optimizes registration of the patient and preoperative CT data by excluding user errors in fiducial localization with magnetic tracking. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual plate setup to avoid magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene compatible with microscopic surgery.

Results: System accuracy was evaluated quantitatively at various target positions on a phantom. The accuracy found is 1.2 mm ± 0.5 mm, with errors primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures in the lateral skull base. The system was evaluated qualitatively during a mastoidectomy on an anatomic head specimen and was judged useful by the surgeon.

Conclusion: A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on/in the patient anatomy, avoiding standard limitations of AR visualizations such as poor depth perception. The custom-built end-effector for the iSYS robot is transparent to the use of surgical microscopes and compatible with magnetic tracking. The cadaver experiment showed that the guidance was accurate and that the end-effector is unobtrusive. This laser guidance has the potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.


Introduction
Navigated surgery in the lateral skull base is technologically and surgically challenging due to the complexity and small size of the surgical targets in the temporal bone (such as the round window of the cochlea). Surgical microscopes are standardly used in ENT (Ear, Nose and Throat) surgery, yet navigated surgical stereo microscopes are rare [1], owing to tedious setups and a rather large impact on the surgical workflow. Furthermore, clinical acceptance is limited by the need for additional staff to run the navigation equipment. In addition, precision optical tracking frequently suffers from obstructed lines of sight between camera and tracked rigid bodies [2].
Our own 25+ years of experience shows that standard pointer-and-monitor navigation in the lateral skull base is more desirable than a navigated microscope; still, an additional display for navigation poses a major mental overhead for the surgeon. Navigating with magnetic tracking might be compatible with surgical stereo microscopes [3] without an extra monitor. Moreover, experience has shown the need for a minimal user interface presenting only the pertinent information directly in the intraoperative scene. We present a system for visualizing information to the surgeon intraoperatively without a monitor and without dedicated pointing tools.
Such a system can provide minimal information with maximum relevance to the surgeon directly in the anatomy: access path, hit/miss of the target. Similar approaches are known, e.g., the ARSys Tricorder [4] or the Probaris system [5], none of which have found widespread use in daily surgical routine, presumably due to the additional constraints introduced to the surgical workflow (e.g., eye calibration or wearing AR glasses intraoperatively). When considering attention shift [6,7] and depth perception [8], spatial AR systems have an advantage over conventional displays or see-through AR systems [9][10][11]. More recently, microscope- and instrument-mounted projectors have been used to provide spatial augmented reality guidance with image projection [12][13][14], or instrument-mounted displays [15] to ease instrument alignment.
Our approach ("CIGuide", Cochlear Implant Guide) projects a laser beam aligned to the surgical access path and the target in the anatomy as visual cues directly in the surgical area [16][17][18]. Neither extra workflow constraints nor mental loads are placed on the surgeon, giving surgically acceptable accuracy with magnetic tracking.

Methods
In this section, we describe the hardware and software components of the prototype guidance system, the proposed workflow and the CIGuide implementation.

Custom end-effector
For laser guidance while using a surgical microscope, a custom-built end-effector (Fig. 2; ACMIT GmbH, Wiener Neustadt, built by Sistro GmbH, Hall i. T., both Austria) replaces the standard needle guide extension (NGE) of the iSYS-1 robot.
It features robust positioning and orienting of the guidance laser beam, causes minimal magnetic field distortion (medical grade-1 titanium) and does not affect the microscope's field of view.
The end-effector is designed for sterilization. In a real surgical setting, all but the transparent mirror can be covered in a sterile sleeve to maintain sterility.

Rhinospider registration tool

Rhinospider is inserted into the nasopharynx prior to preoperative imaging, serves as a fiducial set both for imaging and for magnetic tracking, and stays in place until the end of surgery. Its design allows automated localization both in CT images and during tracking. A fully automated workflow eliminates user errors and allows high-accuracy registrations, potentially with submillimetric target errors.

Dual plate
The dual plate controller (DPC) aligns the guidance laser with the optimal access trajectory planned in the preoperative CT imagery without affecting the magnetic field. Targeting follows a two-step hybrid magnetic-optical control scheme: it starts with EM tracking, followed by optical augmented reality tracking in the microscope's camera view. The DPC components (Fig. 4) enable laser tracking by observing its reflections. It is built from a transparent acrylic glass upper plate (100 mm × 130 mm × 2 mm) and an opaque lower plate made of PEEK (100 mm × 130 mm × 5 mm) engraved with a 10 × 6 checkerboard pattern (10 mm × 10 mm square size, Lex Feinmechanik GmbH, Grabenstätt, Germany), forming an optical reference frame. The reflections on the plates uniquely determine the laser axis in the tracker volume, allowing optimal alignment with the preoperatively planned axis. Four 5D Aurora sensors at well-defined asymmetric positions relative to the checkerboard pattern form a second reference frame. Both frames can be registered uniquely (T_t,p; see Fig. 5). The checkerboard establishes the 3D-to-2D projection between the pattern coordinate system and the camera view, T_PnP, a homography between pattern reference and image frames [22], which can be decomposed into translation, rotation and projection:

T_PnP = T_proj · T_t,r,

where T_proj is the camera projection transform (including nonlinear distortions) and T_t,r is the rigid body transformation between the pattern and camera space [23,24].
For fixed plate and microscope positions, the composition T_PnP · T_t,p bidirectionally connects the microscope view and tracker coordinate frames.
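The optical half of this calibration can be sketched with a plain homography estimation and decomposition, assuming known camera intrinsics K and noise-free corner detections; the function and variable names below are illustrative, not those of the actual implementation, and lens distortion is omitted:

```python
# Sketch (numpy-only, illustrative): the checkerboard gives 3D-2D
# correspondences on the z = 0 pattern plane; a DLT homography H maps
# pattern (X, Y) to image pixels, and with known intrinsics K it
# decomposes into rotation columns r1, r2 and a translation t.
import numpy as np

def dlt_homography(pts_plane, pts_image):
    """Estimate H (3x3) with the direct linear transform, H @ [X, Y, 1] ~ [u, v, 1]."""
    A = []
    for (X, Y), (u, v) in zip(pts_plane, pts_image):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)          # null vector of A, up to scale

def pose_from_homography(H, K):
    """Decompose H into rotation R and translation t (pattern -> camera)."""
    B = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(B[:, 0])    # scale: first column is a unit rotation column
    if B[2, 2] * s < 0:                  # keep the pattern in front of the camera
        s = -s
    r1, r2, t = s * B[:, 0], s * B[:, 1], s * B[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)          # project onto the closest proper rotation
    return U @ Vt, t
```

In practice a library routine such as OpenCV's solvePnP would be used instead, since it also models nonlinear lens distortion (the T_proj part above), which this sketch ignores.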

CIGuide software
A prototype system (CIGuide) featuring planning, intraoperative navigation and robot control was built. Preoperative planning was implemented as a Slicer [25] module, the rest as a separate set of modules [26] based on open-source libraries [27][28][29][30][31]. The planning module is used to transfer the surgical access path, as intended by the surgeon, to the intraoperative surgical scene. Entry point, target and path define the intraoperative path to be visualized by the laser and followed by the surgeon.

Workflow
The most important steps of the workflow are shown in Fig. 6.

Intraoperative registration with Rhinospider
Intraoperative registration [32] between the patient (Rhinospider) and the patient's radiological data requires corresponding pairs of image fiducials and tracker sensors. Rhinospider sensor balls in CT data are detected automatically with a GPU-accelerated (OpenCL, ITK) method [18]. Fifty temporally consistent sensor location readings are averaged while the patient maintains a fixed position relative to the tracker. Sensors and fiducials are paired by finding the permutation with minimum fiducial registration error [33]. This registration workflow with standard radiological CT imagery (0.75-mm slice thickness) can reach submillimetric application accuracy in regions close to the sensors, such as the cerebello-pontine angle or the lateral skull base [21].
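For a handful of fiducials, the permutation search described above can be sketched directly; the brute-force enumeration and the Kabsch-based rigid fit below are illustrative stand-ins for the method of [33]:

```python
# Sketch: rigidly register every permutation of the tracked sensor
# positions against the CT fiducials and keep the pairing with the
# smallest fiducial registration error (FRE). Names are illustrative.
import itertools
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform src -> dst (Kabsch); returns (R, t)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def pair_fiducials(sensors, fiducials):
    """Return (permutation, rms_fre) minimizing the FRE over all pairings."""
    best = (None, np.inf)
    for perm in itertools.permutations(range(len(sensors))):
        src = sensors[list(perm)]
        R, t = rigid_register(src, fiducials)
        fre = np.sqrt(np.mean(np.sum((src @ R.T + t - fiducials) ** 2, axis=1)))
        if fre < best[1]:
            best = (perm, fre)
    return best
```

The exhaustive search is only feasible because the fiducial count is small (n! pairings); the asymmetric Rhinospider sensor layout is what makes the minimum-FRE pairing unique.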

Target designation
At the "Initial Robot Positioning" step, the operator positions the iSYS-1 robot base within the scene and fixes it to the operating table. The laser (i.e., the end-effector) is manually brought to the planned position by directly observing the guidance beam on the patient. The robot has a limited working volume and should have a roughly optimal pose to reach the planned trajectory during robot control. Once the robot is fixed, a few reference movements suffice to decide whether the target location is reachable; if not, the robot position needs to be improved.
Next, the DPC is placed on the patient and target designation starts. As an aid, a live AR view shows the calculated reflections, i.e., where the guidance laser should hit both plates (Fig. 4).

Robot control
Next, a two-step iterative closed loop feedback controller based on visual feedback moves the robot to the desired location [34]. The two main steps executed by the controller are:

1. Reference Movement Step: based on visual feedback, the end-effector is moved to a few predefined locations to decide whether a target is reachable from the current position. If not, the robot must be repositioned.

2. Iterative Refinement Step: based on the difference between the observed and desired locations of the reflections, the robot reduces the feedback error with a correctional move. This is iterated until the feedback error falls below a predefined threshold, or the maximum number of iterations is reached.
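The refinement step above can be sketched as a proportional feedback loop, with a toy linear plant standing in for the robot and dual plate; gain, threshold and iteration budget are illustrative values, not those of the actual controller [34]:

```python
# Minimal sketch of the iterative refinement loop: observe the offset
# between measured and desired laser reflections, command proportional
# corrections until the error is below a threshold or the budget is spent.
import numpy as np

def refine(observe, move, gain=0.8, threshold=0.05, max_iters=50):
    """observe() -> 2D reflection error; move(delta) commands a correction.
    Returns (converged, iterations_used)."""
    for i in range(max_iters):
        err = np.asarray(observe(), float)
        if np.linalg.norm(err) < threshold:
            return True, i
        move(-gain * err)            # proportional correction toward zero error
    return False, max_iters

class ToyPlant:
    """Toy stand-in for robot + dual plate: pose offset equals reflection error."""
    def __init__(self, offset):
        self.offset = np.array(offset, float)
    def observe(self):
        return self.offset
    def move(self, delta):
        self.offset = self.offset + delta
```

With a gain below 1 the residual error shrinks geometrically per iteration, which is why only a handful of correctional moves are typically needed.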

Refraction correction
Refraction at the upper plate requires correcting the measured positions on the lower plate (Snell's law) before they can be used to determine the true location and orientation of the beam in space [18].
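The correction can be sketched with Snell's law for a parallel slab; the 2 mm thickness comes from the plate description, while the refractive index of acrylic (≈1.49) and the geometry below are illustrative assumptions:

```python
# Sketch of the slab refraction correction: a beam crossing the 2 mm
# acrylic upper plate is laterally shifted, so the spot measured on the
# lower plate must be moved back before estimating the true beam axis.
import numpy as np

def refract(d, n_hat, n1, n2):
    """Refract unit direction d at an interface with unit normal n_hat (Snell)."""
    cos_i = -np.dot(d, n_hat)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0:
        return None                              # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n_hat

def slab_shift(d, n_hat, thickness, n_glass, n_air=1.0):
    """Lateral displacement vector of a beam after crossing a parallel slab.
    Assumes the beam enters from air, so refraction into glass always exists."""
    d_in = refract(d, n_hat, n_air, n_glass)
    travel = thickness / -np.dot(d_in, n_hat)    # path length inside the slab
    exit_pt = travel * d_in                      # exit point relative to entry
    straight = (thickness / -np.dot(d, n_hat)) * d   # un-refracted continuation
    return exit_pt - straight                    # correction for the lower plate
```

For a 45° incidence on the 2 mm acrylic plate the lateral shift is close to a millimeter, i.e., on the order of the reported system accuracy, which is why the correction cannot be skipped.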

Experimental setup
The accuracy of the proposed system was evaluated on a plastic skull phantom with inserted Rhinospider sensors and titanium screws at various locations. For each location, a target axis was defined in the preoperative planning module (Fig. 7). The plastic skull was then positioned randomly within the optimal working volume of the Aurora device so that it could be viewed with the microscope. To compensate for illumination losses in the stereo microscope, an external high-resolution camera (uEye UI-1245LE, IDS GmbH; Tamron 13VM550ASII CCTV optics, Tamron Europe GmbH; both Germany) was used to observe the dual plate. A small plastic target plate of 10 mm × 10 mm, 3D printed with a cross-shaped indentation to fit the head of the target screw, was designed (Fig. 8). During the evaluation runs, the plate was put on top of each target to provide a reference plane.

Evaluation procedure
The whole procedure was repeated ten times for each of the five targets. At each iteration, two images were captured with different exposure times: a reference image with the laser off, in which the cross indentation is clearly visible, and a measurement image with a short exposure time and the laser turned on. The measurement image was thresholded, eroded to a few pixels marking the beam's center, and overlaid as a white pixel layer on the normally exposed image. For each target position, the center of the laser spot and the top, bottom, left and right endpoints of the engraved cross were marked manually on the calibrated and undistorted camera images; the millimetric displacement between the center of the cross and the center of the laser spot was then estimated from the pixel distance using the camera calibration.
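The pixel-to-millimeter conversion can be sketched as a local scale estimate from the annotated cross endpoints; the assumed 10 mm arm span and all names are illustrative, standing in for the calibrated, undistorted measurement described above:

```python
# Sketch: the annotated cross endpoints give a local mm-per-pixel scale
# (the cross arms span a known size on the 10 mm target plate), which
# converts the pixel offset between cross center and laser-spot center
# into millimeters. Assumes the image is already undistorted and the
# target plane is roughly fronto-parallel.
import numpy as np

def displacement_mm(cross_center_px, spot_center_px,
                    arm_end_a_px, arm_end_b_px, arm_span_mm=10.0):
    """Estimate the laser-to-target displacement in mm from annotated pixels."""
    px_span = np.linalg.norm(np.subtract(arm_end_a_px, arm_end_b_px))
    mm_per_px = arm_span_mm / px_span            # local scale at the target plane
    offset_px = np.linalg.norm(np.subtract(spot_center_px, cross_center_px))
    return offset_px * mm_per_px
```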

Predicting guidance uncertainties at the target
The uncertainty of the approach at a given target, including the dual plate location, was estimated with a Monte Carlo simulation of probable guidance trajectories, based on [18,33,35]. The prediction method can also be used to visualize the expected uncertainty of a given setup before the actual robot control is executed (Fig. 9). After the experimental evaluation, the validity of these predictions was checked against the measurements.
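A toy version of such a Monte Carlo prediction is sketched below; the Gaussian noise magnitudes are illustrative placeholders for the calibrated error models of [18,33,35]:

```python
# Sketch of the Monte Carlo uncertainty prediction: sample plausible beam
# poses from Gaussian position/orientation noise, propagate each ray to
# the target plane, and report the spread of the hit points.
import numpy as np

def predict_target_spread(depth_mm, pos_sigma_mm=0.3, ang_sigma_rad=0.002,
                          n_samples=10000, seed=0):
    """RMS radial deviation (mm) of laser hit points on a plane depth_mm away."""
    rng = np.random.default_rng(seed)
    # lateral origin noise at the plate plus angular noise of the beam axis
    origin = rng.normal(0.0, pos_sigma_mm, size=(n_samples, 2))
    angle = rng.normal(0.0, ang_sigma_rad, size=(n_samples, 2))
    hits = origin + depth_mm * np.tan(angle)     # ray propagation to the plane
    centered = hits - hits.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum(centered ** 2, axis=1))))
```

The sketch makes the key behavior visible: angular tracking noise is amplified linearly with the plate-to-target depth, so the predicted spread depends on where the dual plate is placed.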

Cadaver experiment
For evaluation, the guidance system was used in a cadaver mastoidectomy. First, the mandible and the cranial remains of the neck were removed from an anatomic specimen to gain access to the choana from posterior. A Rhinospider assembly with a 6D DRF was fixed with superglue to the nasal mucosa (Fig. 10). Standard clinical cranial CT images (1-mm slice thickness) were acquired. During preoperative planning, the head of the incus, which can be identified easily and precisely both on the patient and in the radiological imagery, was designated as the target. Patient registration was repeated four times with slightly different orientations relative to the magnetic field emitter; the RMS FRE was 0.84 ± 0.13 mm. The registration was qualitatively validated by touching the implanted 2-mm titanium screws with the navigation system's probe and checking the error in the CT images (Fig. 11). After successful validation of the registration, the surgeon performed the drilling steps of the mastoidectomy with the microscope while the guidance beam was active.

Results
The resulting target accuracies from the quantitative evaluation on the plastic skull are presented in Table 1. The system reaches an average target accuracy error of 1.2 mm ± 0.5 mm, which is close to the limit achievable with the magnetic tracking system in use. (Fig. 11: partial screenshot of the navigation while the accuracy of the registration on the cadaver was qualitatively evaluated by localizing the implanted screws with the navigation probe.) The overall predicted standard deviation of the target accuracy for the tested plate locations was ± 0.52 mm, which corresponds well to the measured experimental uncertainty.
System setup, including control iterations after patient registration, added approximately 15 min to the intraoperative workflow. On the cadaver, the target accuracy at the incus was subjectively evaluated by the surgeon and the assistant after the drilling step (Fig. 12). The accuracy of the guidance beam at the end of the experiment was estimated to be 2 mm. Overall, the surgeon stated that the guidance did not obstruct the view during the mastoidectomy and that it can be helpful in more complicated cases.

Discussion and conclusions
It is concluded that magnetic tracking offers an easier approach to intraoperative tracking and user-error-free registration, without line-of-sight requirements. Sensors are small enough to be positioned close to the relevant surgical structures inside the body. (Fig. 12: screenshot captured from the live view of the surgeon through the microscope with the CIGuide guidance beam in place. On the top left is the ear canal, with the malleus visible in the middle ear cavity after moving the eardrum. To the right of the posterior wall of the outer ear canal is the mastoid cavity, with the guidance beam reflecting from the head of the incus; a partial reflection of the beam is also visible on the posterior wall. After targeting and drilling, the error of the guidance beam relative to the preoperatively planned location, marked by the green arrow, was approximated to be 2 mm.) Rhinospider technology is similar to nasal stents (e.g., http://www.alaxo.com/alaxolito_eng.html, Alaxo, Germany) that patients tolerate very well, easing nasal breathing in cases of nasal congestion and during sports.
Once the registration and the plate position are determined, a feedback controller uses HD camera tracking to align the robotic laser beam. Thus, the robotic platform inside the magnetic field does not need to be magnetically tracked, which simplifies the robot design. Optical tracking is far more accurate and allows positioning the robot platform far from the region of interest, or even outside the working volume of the tracker.
This hybrid tracking approach enables direct tracking of the guidance laser beam, resulting in significantly better target accuracy than direct magnetic tracking of the robotic end-effector itself. Other designs of the control plate and of its sensor mountings could further reduce the target error somewhat. Further experiments with anatomic specimens and surgeons performing "real" interventions are planned to determine the system's behavior under more realistic conditions.
In our initial tests, the system shows promising potential as a laser guidance platform that is easy to use, is built from standard elements and can be utilized during surgery without major additional workload on the medical personnel. The system allows in situ visualization of information with a fairly small impact on the surgeon's mental workload and can easily be integrated into existing operating theatres and workflows.
The CIGuide system as presented builds on Rhinospider technology applied to the nasopharynx. This is no limitation for microscopic interventions of any kind at the lateral skull base. The insertion of the short-term (less than one day) registration device in the nasopharynx has shown promising guidance in our laboratory investigation. Endoscopic interventions at the anterior skull base, the pituitary, or beyond are not intended to benefit from this technology. The authors are aware of the limits of using a single laser beam as a guidance aid. Preliminary work with a second laser to encode the spatial target position as an intuitive surgical visualization is under way; due to the complexity of the issue, it is foreseen to be published separately.