A haptics-assisted cranio-maxillofacial surgery planning system for restoring skeletal anatomy in complex trauma cases
Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious facial trauma can be both complex and time-consuming. It is generally accepted that careful pre-operative planning leads to a better outcome, with a higher degree of function and reduced morbidity, in addition to reduced time in the operating room. However, today’s surgery planning systems are primitive, relying mostly on the user’s ability to plan complex tasks with a two-dimensional graphical interface.
We present a system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptic system provides intuitive feedback when bone fragments are in contact, as well as six degrees-of-freedom attraction forces for precise bone fragment alignment.
A senior surgeon with no prior experience of the system received 45 min of system training. Following the training session, he completed a virtual reconstruction of a complex mandibular fracture in 22 min, with an adequately reduced result.
Preliminary testing with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time.
Keywords: Cranio-maxillofacial surgery planning · Haptics · Stereo visualization · Collision detection · Snap-to-fit
Introduction and related work
One fundamental task in cranio-maxillofacial (CMF) surgery is to restore normal skeletal anatomy in patients with extensive fractures of the facial skeleton and mandible from gunshot wounds, work-related injuries, natural disasters, or traffic accidents. Any attempt to restore a bone fragment to its original position poses considerable risk of additional damage to vital anatomical structures. Furthermore, small errors in the positioning of each fragment may accumulate and lead to an inadequate reconstruction, which in turn may cause poor function and a poor esthetic result.
Surgical planning from CT data may improve the surgical outcome and reduce time in the operating room. But if the planning relies only on visual cues, object contact and object penetration can be difficult to discern because contact surfaces are likely to be occluded by the many bone fragments. Current commercially available CMF surgery planning systems, for example those by Planmeca, Materialise, and Brainlab, rely primarily on two-dimensional graphical interfaces. These put great demands on the user, who must visualize complex 3D models from their 2D projections on a two-dimensional display and plan delicate 3D procedures using a set of 2D projections and 2D interaction tools. Furthermore, while a surgeon relies heavily on his/her sense of touch in the operating room, surgical planning systems generally do not use the sense of touch to complement the visual interface. As a result, CMF surgery planning is time-consuming and cumbersome, making it difficult to find an optimal surgical strategy, which in turn may result in a less than perfect reconstruction with needless patient discomfort and loss of functionality.
Several research groups have developed systems for CMF surgery planning. Essig et al. [4] and Rana et al. [5] describe interactive computer-assisted planning and surgery tools using photorealistic imaging for optimized treatment of oral and maxillofacial malignancies and for tissue engineering of bone.
Juergens et al. [6, 7] describe planning tools that include skull and soft tissue segmentation, assessment of skeletal muscle properties, characterization of the mechanical response of soft facial tissue, clinical validation, and transfer of the CMF planning into the operating room. However, haptics was not explored as an interaction modality in these systems.
Haptics has the potential to improve surgical planning by giving the surgeon virtual tools that are familiar from the operating room: s/he can feel whether two bone fragments fit together or whether the occlusion (bite) is correct. Contact forces also help the surgeon avoid interpenetration of fragments that may be difficult to discern visually. Forsslund et al. [8] present a requirements study for CMF surgery planning with haptic interaction for bone fragment and plate alignment, exploring which features might be important in haptic cranio-maxillofacial planning. This is done with physical mock-ups, complemented by the implementation of some features in software. They identify “haptic fidelity” as a highly important aspect for success in this type of system.
Haptic feedback is used to increase realism in simulators for training specific surgical procedures. Pettersson et al. [9] present a simulator for cervical hip fracture surgery training which provides visuo-haptic feedback for the drilling task central to this procedure. Morris et al. [10] describe a bone surgery training simulator also focused on drilling, in this case of the temporal bone and the mandible. This simulator provides audio feedback in addition to the visual and haptic feedback. A survey of visuo-haptic systems for surgical training, with a focus on laparoscopic surgery, can be found in [11].
We present a system that combines stereoscopic 3D visualization with six-DOF haptic rendering that can be used by a surgeon with only minimal training. The system features a head tracker to enable user “look-around” in the graphical scene, a simulated spring coupling between the manipulated virtual bone fragment and the haptic handle for enhanced haptic stability, high-precision collision detection, the ability to group and manipulate a set of fragments as one entity, and Snap-to-fit, a tool for precision alignment of matching bone fragments.
The patient data comprise segmented volumetric CT data from the fractured regions in which independent bone fragments are labeled. (See section “Image data handling.”) A half-transparent mirror with stereo glasses gives the user a stereoscopic view of the data, and the haptic unit, positioned under the mirror, has a handle for moving the entire CT model or individual bone fragments. (See Fig. 1.)
A head tracker, which continually updates the user’s vantage point, gives the user “look-around,” that is the ability to view objects from different angles by simply moving his/her head. This is essential for detecting bone fragments that may be (partially) occluded from certain vantage points.
During fragment manipulation, contact force and torque from contacts with other fragments are rendered haptically with high spatial resolution, giving the user an impression similar to that of manipulating a real, physical object around other objects. To limit inter-object penetration, we simulate a translational and a rotational spring, commonly known as a virtual coupling, between the bone fragment currently under manipulation and the haptic handle. The user may push a manipulated bone fragment toward another bone fragment, which stretches the simulated spring, but the manipulated fragment stops at the other fragment’s surface instead of penetrating it. This dramatically increases the stability of the haptic interaction [12].
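The virtual coupling can be sketched as a translational and a rotational spring between the handle pose and the simulated fragment (proxy) pose. The sketch below is illustrative, not our implementation; the stiffness values and the w-first quaternion convention are assumptions:

```python
import numpy as np

def coupling_wrench(handle_pos, proxy_pos, handle_quat, proxy_quat,
                    k_t=500.0, k_r=2.0):
    """Spring force/torque pulling the proxy (fragment) toward the handle.

    Translational spring: proportional to the position offset.
    Rotational spring: proportional to the axis-angle of the relative
    rotation between the two orientations (unit quaternions, w-first).
    """
    force = k_t * (handle_pos - proxy_pos)

    # Relative rotation q_rel = q_handle * conj(q_proxy)
    w1, x1, y1, z1 = handle_quat
    w2, x2, y2, z2 = proxy_quat
    x2, y2, z2 = -x2, -y2, -z2          # conjugate of proxy quaternion
    q_rel = np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
    if q_rel[0] < 0:                    # take the shortest-path rotation
        q_rel = -q_rel
    angle = 2.0 * np.arccos(np.clip(q_rel[0], -1.0, 1.0))
    axis = q_rel[1:]
    n = np.linalg.norm(axis)
    axis = axis / n if n > 1e-9 else np.zeros(3)
    torque = k_r * angle * axis
    return force, torque
```

In a haptic loop, the same wrench (with opposite sign) would be sent to the device, so the user feels the stretched spring while the fragment itself stays outside other geometry.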
When two or more fragments have been positioned relative to one another, the user may group them and manipulate them as one unit. Additional fragments may subsequently be attached to extend the group and they may also be detached from the group. When bone fragments are grouped, the entire group is given one color. The grouping tool is activated with pushbuttons on the 3DConnexion unit placed to the left under the half-transparent mirror. (See Fig. 1.)
In what follows, we describe in more detail the unique feature Snap-to-fit, which complements the contact forces to aid the user in bone fragment alignment.
Snap-to-fit, a complement to contact forces
The alignment tool, Snap-to-fit, complements haptic contact forces in the search for a good fit between two bone fragments. For a detailed description of Snap-to-fit, we refer the reader to [13]. In summary, the user begins by moving a bone fragment close to a matching fracture surface on another bone fragment. From this approximate initial position of the two fragments, the user activates Snap-to-fit with the foot-switch (shown in Fig. 1), which engages attraction forces computed from the fracture surfaces. The forces pull the manipulated fragment toward the closest stable fit, that is, they “snap” the fragments into a locally stable fit (see Fig. 4). We scale the attraction forces by the similarity of the fracture surfaces, computed from the collinearity of the surface normals. Fragments with matching surfaces experience stronger attraction forces than those with less similar surfaces.
One limitation of the original implementation of Snap-to-fit is that the fragments may “snap” to several alternative positions, since the whole fragment surface is a potential matching surface. We therefore extend the method described in [13] with the ability to mark portions of fragment surfaces as fracture surfaces, allowing the user to “paint” the fracture surfaces with the haptic cursor (see Fig. 4). Only painted surfaces are included in the attraction force model, which prevents the fragment from snapping to false matches outside the marked fracture surfaces.
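The core idea of an attraction force restricted to painted surfaces can be sketched as follows. This is a simplified stand-in for the published force model, not the model itself: it assumes point-sampled painted surfaces with outward normals, pairs each point with its nearest neighbor on the other fragment, and weights the pull by how anti-parallel the two normals are (matching fracture faces point toward each other):

```python
import numpy as np

def snap_attraction(pts_a, nrm_a, pts_b, nrm_b, k=1.0):
    """Mean attraction force on fragment A from fragment B.

    pts_a, pts_b: (N,3)/(M,3) painted surface sample positions.
    nrm_a, nrm_b: matching unit surface normals.
    Only painted samples are passed in, so unpainted regions
    contribute no force at all.
    """
    total = np.zeros(3)
    for p, n in zip(pts_a, nrm_a):
        d = pts_b - p
        j = np.argmin(np.einsum('ij,ij->i', d, d))   # nearest painted sample on B
        # weight in [0,1]: 1 when the normals are exactly opposed
        w = max(0.0, -float(np.dot(n, nrm_b[j])))
        total += k * w * (pts_b[j] - p)
    return total / len(pts_a)
```

Because the weight vanishes for normals that face the same way, two outer (non-fracture) surfaces sliding past each other generate little attraction, while opposed fracture faces pull strongly together.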
Hardware and implementation details
Our planning system executes on an HP Z400 Workstation with an Nvidia Quadro 4000 Graphics Processing Unit (GPU) driving a Samsung 120 Hz stereo monitor which displays time-multiplexed stereo graphics at a resolution of \(1,680\times 1,050\), synchronized with a pair of Nvidia 3D Vision Pro shutter glasses. The half-transparent mirror rig used for visuo-haptic collocation is manufactured by SenseGraphics. The head tracker is based on an IR optical tracker (OptiTrack from NaturalPoint) with built-in motion capture and image processing that optically tracks a marker rig consisting of four IR-reflecting spherical markers placed asymmetrically on the stereo glasses worn by the user. After careful registration of the tracking frame with the visual frame, the head tracker estimates the user’s vantage point, from which we render the stereo perspective.
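Rendering the perspective from a tracked vantage point onto a fixed physical screen amounts to an asymmetric (off-axis) viewing frustum. A minimal sketch of the frustum computation, assuming the screen is centered at the origin of the display frame in the z = 0 plane and the eye position has already been registered into that frame:

```python
def offaxis_frustum(eye, half_w, half_h, near):
    """Asymmetric frustum bounds for a head-tracked fixed screen.

    eye: (x, y, z) eye position in the display frame, with z > 0
         in front of the screen.
    half_w, half_h: half-extents of the physical screen (same units).
    near: near-plane distance.
    Returns (left, right, bottom, top) at the near plane, as consumed
    by a glFrustum-style projection setup.
    """
    ex, ey, ez = eye
    s = near / ez                      # scale screen extents to the near plane
    left   = (-half_w - ex) * s
    right  = ( half_w - ex) * s
    bottom = (-half_h - ey) * s
    top    = ( half_h - ey) * s
    return left, right, bottom, top
```

For a centered eye this degenerates to the usual symmetric frustum; as the user moves his/her head, the bounds shear so that the on-screen image stays consistent with the physical viewing geometry, producing the “look-around” effect.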
We render the bone fragment surfaces on the visual display using splatting [17], which is implemented on the GPU to achieve real-time rendering.
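Splatting accumulates a small 2D "footprint" kernel for each projected surface sample instead of rasterizing polygons. A minimal CPU sketch of the idea with a Gaussian footprint; our renderer runs on the GPU and this is only an illustration of the accumulation-and-normalization scheme:

```python
import numpy as np

def splat(points, values, img_w, img_h, sigma=1.0, radius=2):
    """Render pre-projected surface samples with Gaussian footprints.

    points: (N,2) screen-space sample positions.
    values: (N,) sample intensities.
    Accumulates weighted intensity and weight buffers, then divides
    to normalize overlapping footprints.
    """
    acc = np.zeros((img_h, img_w))
    wgt = np.zeros((img_h, img_w))
    for (x, y), v in zip(points, values):
        cx, cy = int(round(x)), int(round(y))
        for j in range(cy - radius, cy + radius + 1):
            for i in range(cx - radius, cx + radius + 1):
                if 0 <= i < img_w and 0 <= j < img_h:
                    w = np.exp(-((i - x)**2 + (j - y)**2) / (2.0 * sigma**2))
                    acc[j, i] += w * v
                    wgt[j, i] += w
    mask = wgt > 0
    acc[mask] /= wgt[mask]
    return acc
```

On the GPU, the two inner loops become a fragment shader evaluating the footprint per pixel, which is what makes real-time rates achievable for full CT-derived surfaces.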
We use a Sensable Phantom Premium 1.5 High Force/6DOF haptic device, which provides six-DOF input and six-DOF force/torque output and runs at a haptic frame rate of 1 kHz. We render the six-DOF contact forces using a rigid-body contact model combined with a virtual spring (static virtual coupling) which decouples the position and orientation of the manipulated bone fragment from the haptic handle to improve haptic stability [12]. We rely heavily on pre-computation and hierarchical data structures to achieve real-time haptic interaction rates [19]. The contact force model and the static virtual coupling are detailed in [12]. Snap-to-fit is implemented according to [13], with the extension that the user may mark fracture surface areas; only marked surface areas are included in the attraction force model.
Image data handling
We load the patient-specific volumetric image data from a DICOM stack of CT images. The images in this study have an in-plane resolution of 0.35 mm, and the inter-slice distance is 0.60 mm. We first segment bone tissue from soft tissue by thresholding the CT data and then remove small, isolated tissue components with fewer than 100 connected voxels using the bwareaopen filter in MATLAB. We manually segment and label individual bone fragments in the resulting image volume using ITK-SNAP before loading them into the planning system.
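The thresholding and small-component removal steps can be sketched in Python with SciPy, which plays the role of MATLAB's bwareaopen in our pipeline. The threshold value below is an illustrative assumption, not the value used in this study:

```python
import numpy as np
from scipy import ndimage

def segment_bone(ct, threshold=300, min_voxels=100):
    """Bone mask from a CT volume.

    Thresholds the volume (Hounsfield-like units; the value here is an
    assumption) and then drops connected components smaller than
    min_voxels, mirroring MATLAB's bwareaopen.
    """
    mask = ct > threshold
    labels, n = ndimage.label(mask)                    # 3D connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1)) # voxel count per label
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_voxels
    return keep[labels]
```

The resulting mask would then be split into individually labeled fragments by hand, as described above.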
A 59-year-old male had sustained major trauma to his left maxillofacial region, resulting in a large defect of the horizontal and vertical parts of the mandible as well as a fracture of the zygomatic bone with moderate displacement of the entire zygomatic complex, without any fragmentation. (See Fig. 5.) The training session involved using the features of the system, including Snap-to-fit, to position the zygomatic complex correctly. To position the zygoma, the surgeon marked the full fracture surface on the zygomatic bone and the corresponding fracture surface on the cranium. He then moved the bone fragment into an approximate initial position and activated Snap-to-fit, which produced the result shown in Fig. 5.
Evaluation results and observations
The surgeon completed the reconstruction shown in Fig. 6 in 22 min after 45 min of training on the practice case. The fractures in the mandible are adequately reduced. It was not possible to obtain perfect occlusion due to interference from dislocated teeth; a tool to remove unwanted parts could be useful in such situations. The surgeon made extensive use of the grouping tool, building groups of fragments once he found a good fit. He also used the head tracking feature increasingly throughout the session to look around objects instead of relying on rotation to get good visibility. He noted that he could perceive haptically when a bone fragment under manipulation did not fit, either due to misplacement or due to inadequate reconstruction of previously positioned fragments. He also commented that the system is useful for understanding the complexity of a specific case, and that during the planning process he gained insight into the preferred order of fragment placement; assembling the fragments in a certain order may provide valuable clues toward the best global reconstruction. The surgeon did not favor Snap-to-fit in this case after trying it on some fragments that were too small to give a robust result. He therefore relied on contact forces and visual inspection to complete the reconstruction.
Similar to the success of haptics in surgery training simulators [9, 10, 11], we believe that haptics can greatly improve the efficacy of CMF surgery planning software. To produce a complete planning tool, we need to add a number of features. A robust automatic method to find an initial segmentation, complemented by interactive segmentation to remove unwanted objects during a planning session, such as the dislocated teeth in the evaluation case, would be of high value. Future work also includes the virtual design of reconstruction plates for additive manufacturing prior to surgery, and exploring ways to transfer the reconstruction plan to the operating room. There is also a need for a function that allows shaping and fitting of bone grafts or biomaterial to repair defects acquired from trauma. Finally, a thorough evaluation with several surgeons, including a comparison with existing CMF planning software packages, is needed to establish the efficacy of our system.
We have described work in progress on a system that supports the planning of skeletal anatomy restoration in complex trauma cases. The key features are as follows: stereo graphics and head tracking that enable “look-around,” allowing the user to view the patient-specific anatomy in 3D from different angles by simply moving his/her head; stable six-DOF high-precision haptic rendering that provides intuitive guidance when manipulating virtual bone fragments, allowing the user to feel when one fragment is in contact with another; and Snap-to-fit, which complements the contact forces with attraction forces to aid the precise placement of larger fragments such as the zygomatic bone. Grouping allows several bone fragments to be manipulated as one entity. Preliminary testing with one surgeon indicates that our haptic planning system has the potential to become a powerful tool that, with little training, allows a surgeon to complete a complex CMF surgery plan in a short amount of time.
We wish to thank Dr. Andreas Thor, Dept. of Surgical Sciences, Oral and Maxillofacial Surgery, Uppsala University, for taking part in the evaluation of the system. We also wish to thank our funding agencies, the Knowledge Foundation, VINNOVA, SSF, ISA, and the Vårdal Foundation, for their generous support. The Ethical Review Board in Uppsala approved the use of the patient image data for this study, application 2012/269. Informed consent was obtained before the study from either the patient or the next of kin.
Conflict of Interest
The authors declare that they have no conflict of interest.
- 4. Essig H, Rana M, Kokemueller H, von See C, Ruecker M, Tavassol F, Gellrich NC (2011) Pre-operative planning for mandibular reconstruction—a full digital planning workflow resulting in a patient specific reconstruction. J Head Neck Oncol 3:45
- 5. Rana M, Essig H, Eckardt AM, Tavassol F, Ruecker M, Schramm A, Gellrich NC (2012) Advances and innovations in computer-assisted head and neck oncologic surgery. J Craniofac Surg 23(1):272–278
- 8. Forsslund J, Chan S, Salisbury JK, Silva R, Girod SC (2012) Design and implementation of a maxillofacial surgery rehearsal environment with haptic interaction for bone fragment and plate alignment. CARS extended abstract
- 10. Morris D, Sewell C, Barbagli F, Blevins NH, Girod S, Salisbury K (2006) Visuohaptic simulation of bone surgery for training and evaluation. IEEE Comput Graph Appl 26(4):48–57
- 11. Hamza-Lup FG, Bogdan CM, Popovici DM, Costea OD (2011) A survey of visuo-haptic simulation in surgical training. In: Proceedings of the international conference of mobile, hybrid and on-line learning, pp 57–62
- 12. Wan M, McNeely WA (2003) Quasi-static approximation for 6 degrees-of-freedom haptic rendering. In: Proceedings of IEEE visualization 2003, Seattle, WA, pp 257–262
- 13. Olsson P, Nysjö F, Hirsch J, Carlbom I (2013) Snap-to-fit, a haptic 6 DOF alignment tool for virtual assembly. In: Proceedings of the world haptics conference, Daejeon, Korea
- 17. Westover L (1990) Footprint evaluation for volume rendering. In: Proceedings of ACM SIGGRAPH, pp 367–376
- 19. Barbic J (2007) Real-time reduced large-deformation models and distributed contact for computer graphics and haptics. Dissertation, Computer Science Dept., Carnegie Mellon Univ., Aug 2007
Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.