• Augmented reality is a new navigation technology that allows the superposition of clinical information in the surgeon's field of view.

  • Not only the planning and performance of patient-specific joint replacement techniques but also training for orthopaedic surgery will benefit from augmented reality systems.

  • The availability of augmented reality platforms depends on improvements in computing power, tracking-system precision and the development of environment-understanding algorithms.

This chapter aims to introduce some elements of reflection regarding the use of an augmented reality platform for patient-specific hip and knee joint arthroplasties. The chapter is composed of three main parts. The first part offers a brief presentation of the ‘reality concept’ and the recent technological progress that has made this new technology ready for use in the OR. The second part describes the augmented reality process and the technological bottlenecks that remain to be addressed. The last part provides several case examples using this technology in the OR.

1 From Reality to Augmented Reality

Surgeons are used to interacting with their surgical environment. During a procedure, surgeons use all their senses to perform complex tasks to an optimal standard. However, vision, touch and hearing are the senses most readily relied on in practice. The aptitude to assess a situation through the senses is learned and refined through medical training and through the experience acquired in clinical practice.

In recent decades, digital technologies used in augmented reality have been developed to interact with the human senses. These technologies project the user into a reality described in digital memory. This “digital” reality is then rendered to the human senses through a digital interface: an image for the eyes, a sound for the ears, a pressure for the touch. When the digital system can measure the user's actions, it can alter the digital reality and present the new version to the user.

Except for touch, these technologies have been widely used in computer games, particularly “First-Person Shooter” games, where the picture is rendered on a screen. This is non-immersive virtual reality, as the user remains connected with part of their reality. The recent development of immersive headsets makes it possible to isolate the user from their reality and expose them to a stereoscopic display. In this case, each eye has its own display, and the computer renders a picture from a different position for each eye. The user's stereoscopic perception allows full 3D immersion in the virtual environment. Going further, the augmented reality headset uses stereoscopic projection of virtual objects onto the user's real environment.

Since their inception, digital technologies used in augmented reality have aimed to alter the user's senses. Digital systems can handle information representing an object, a scene or properties of a scene, which constitutes a kind of reality. Several products have been developed to deliver such virtual information to the user's senses. The main senses targeted are vision and hearing: screens and audio devices were the first interfaces to present virtual information to the user. Touch has also been addressed through the development of haptic devices, but this sense is particularly difficult to serve because it requires the delivery of mechanical forces, which are hard to produce with wearable devices. The augmented reality concept, which aims at projecting information from a virtual reality into the user's reality, requires the system to understand the user's reality and to render the virtual objects properly.

Information usability is intimately related to the quality of the information and of its presentation. For example, it would be difficult to prepare a plan on a low-quality image display or with a low-precision bone surface model. This was a technological bottleneck for many years. In recent decades, the digital world has improved in both technology and availability: technological progress has brought higher computing power, better output interfaces and smaller devices. Computing power has increased exponentially; the Tianhe-2 supercomputer is about 2.73 × 10¹² times more powerful than the IBM 704 released in 1954.
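The order of magnitude of this ratio can be checked with a quick back-of-the-envelope calculation. The sketch below uses approximate published peak rates, which are assumptions rather than figures from this chapter: roughly 12 kFLOPS for the IBM 704 and roughly 33.86 PFLOPS for Tianhe-2.

```python
# Rough sanity check of the computing-power ratio quoted above.
# Peak-rate figures are approximate published values, assumed here.
ibm_704_flops = 12e3          # ~12 kFLOPS (IBM 704, 1954)
tianhe_2_flops = 33.86e15     # ~33.86 PFLOPS (Tianhe-2 Linpack result)

ratio = tianhe_2_flops / ibm_704_flops
print(f"Tianhe-2 / IBM 704 ratio: {ratio:.2e}")
```

With these figures the ratio comes out around 2.8 × 10¹², the same order of magnitude as the 2.73 × 10¹² quoted in the text; the exact value depends on which peak rates are assumed.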

Following the increase in computing power, availability improved as well. For example, the Cray-2, released in 1985, and the iPhone 6 have similar computing power. As shown in Table 27.1, technological advances now allow the device to be carried and used easily. The device has also become far more available: fewer than a hundred Cray-2 machines were sold, whilst over 220 million iPhone 6 devices have been delivered. The level of expertise required is another important factor: using a Cray-2 required a computer scientist or an engineer, whilst almost anyone can operate an iPhone 6. Cost is a further key factor: the Cray-2 was priced at US$32 million versus US$649 for the iPhone 6. Improvements in display devices and in computing power for image rendering have driven further advances in picture quality and narrowed the gap with reality (Fig. 27.1). Haptic feedback is a special case, more closely associated with the robotics domain.

Table 27.1 Difference of power consumption, weight and price for the Cray 2 and the iPhone 6
Fig. 27.1
figure 1

The triptych of images produced by virtual reality platforms shows the evolution of rendering quality over the years, which increases the level of realism perceived by the user. Images from 1994, 2002 and 2019 (adapted from [9,10,11])

This short comparison illustrates the substantial improvement of digital technologies, which has propelled the recent development of augmented reality into the clinical domain. Surgical outcomes can be improved with better planning and with implant positioning parameters visible in real time. Augmented reality is one technology that could present this additional information to the clinician to assist during surgery.

2 How Does Augmented Reality Work?

Augmented reality technologies aim to introduce virtual elements into the user's environment. The information presented needs to be related to reality: the system must measure and understand the user's reality, process it to compute the required information, and then render that information in correlation with reality. For that purpose, the patient's position, as well as the relative positions of equipment such as tools and display devices, is a crucial measurement and is mandatory for computing feedback information.

2.1 Tracking

Several technologies have been proposed to measure and track object positions. They fall into three classes of measuring methods, depending on the link between the object and the measuring equipment: contact, semi-contact and contactless.

With a contact system, there is a mechanical link between the measuring system and the object. For example, the Acrobot uses a digitizer arm to locate the position and orientation of objects. The arm is anchored into the bone, and each articulation of the arm measures the spatial parameters through the lengths of, and the angles between, the arm's segments.

The semi-contact system relies on a contact link between the anatomy and the markers and a contactless link between the markers and the cameras. The markers' 3D positions are computed by triangulating each marker's 2D position in each camera of the optical system. Thanks to the unique spatial configuration of the markers attached to each object, the system is able to recognize the object. This method is currently used by the majority of navigation systems.
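The triangulation step can be illustrated with a deliberately simplified sketch: the marker's 3D position is taken as the midpoint of the shortest segment between the two viewing rays of a stereo camera pair. Real optical trackers additionally handle camera calibration and lens distortion; all names and numbers below are illustrative.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def unit(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    o1, o2: optical centres of the two cameras.
    d1, d2: unit ray directions derived from each camera's 2D marker
            detection and its calibration.
    """
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # close to 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))      # closest point on ray 1
    p2 = add(o2, scale(d2, t2))      # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Two cameras 40 cm apart, both seeing a marker at (0, 0, 1) m.
cam1, cam2 = [-0.2, 0.0, 0.0], [0.2, 0.0, 0.0]
ray1 = unit(sub([0.0, 0.0, 1.0], cam1))
ray2 = unit(sub([0.0, 0.0, 1.0], cam2))
marker = triangulate(cam1, ray1, cam2, ray2)   # ≈ [0.0, 0.0, 1.0]
```

With perfect 2D detections the rays intersect exactly; with real, noisy detections the midpoint construction gives a least-squares-style compromise between the two rays.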

The contactless system is more recent and still under development. In this case, nothing is in contact with the patient's anatomy: tracking is performed without attaching any markers to the patient. This became possible with the advent of depth cameras, as shown by Liu et al. [1]. A depth camera is an active sensor that projects a structured light pattern onto the scene; from this projected pattern, it can reconstruct the 3D surface of the scene. The anatomy of the object thus becomes its own marker, and tracking is done by identifying and following the surfaces over time. However, this method is at an early stage, with only a proof of concept proposed in the literature. It is nevertheless very promising, as it could track not only bone position but also bone shape modification. With the contact and semi-contact methods, the user must sample the bone surface with a dedicated tool in order to register it with the tracking device, whereas the contactless system directly delivers a sample of the object's 3D surface. This bone surface sample is then used in the computing stage for bone registration.

For an augmented reality system, the semi-contact and contactless tracking systems are the most suitable. Figure 27.2 shows an example of a hybrid solution combining semi-contact and contactless tracking: the drill is tracked with an attached marker, while the femoral head is tracked with the depth camera.

Fig. 27.2
figure 2

The setup for a proof of concept of an intraoperative augmented reality assistance system. The system introduces contactless tracking for the femoral part and visual feedback for the user in the augmented reality headset. A particularity of this setup is the automatic positioning of the depth camera by the robot arm, which ensures visibility of the femoral part when an occlusion occurs. The tool is still tracked with a classical semi-contact method using an attached marker (Adapted from [1])

The bottleneck of the contact and semi-contact tracking systems is the need for a bone shape digitizing stage at the beginning of the procedure and after each bone modification, which takes time to perform. However, these systems are robust to bone shape modification because they rely on markers for tracking. For the contactless system, the bottleneck lies in the precision of the depth sensor and in the scene understanding required to separate and identify the different objects in the measured 3D shape.

2.2 Computing

The computing part of the process consists of two operations. The first is the registration of the tracked anatomical parts with the preoperative images. The second is the computation of clinical indices from the raw information, for example, comparing the actual situation with the preoperative plan. The registration operation is required because the position adopted by the patient during preoperative imaging differs from the patient's actual position in the OR. The operation must identify corresponding elements in the preoperative and intraoperative information in order to compute the spatial transformation between them. In orthopaedic surgery, this is made easier by the rigid nature of bone, in contrast to procedures involving only soft tissue. For the contact and semi-contact tracking methods, this operation is already robust and used in most navigation systems. It usually relies on a fiducial-based registration method, in which anatomical fiducials are annotated in the preoperative images and then identified intraoperatively by the surgeon. The algorithm seeks the spatial transformation that makes the sampled bone points fit the preoperative bone surface. Here the bone surface has been identified by the surgeon, who recognized and sampled the points. This registration is commonly performed by applying an iterative closest point method between the surface extracted from the computed tomography scan and the intraoperative sample points. With the contactless method, the operation is more difficult because the depth camera records the whole environment: bones, soft tissues and background. The first step of the analysis is to distinguish the nature of the tissues; once this is done, the measured bone surface can be compared with the corresponding bone surface from the preoperative medical imaging.
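The iterative closest point idea can be sketched in a deliberately reduced form. The toy below estimates only the translation aligning intraoperative sample points to a preoperative surface point cloud; a clinical implementation also estimates rotation (typically with a Kabsch/SVD step) and works on far denser data. All names are illustrative.

```python
def icp_translation(model, samples, iters=10):
    """Toy translation-only ICP: iteratively match each sample to its
    closest model point, then shift the samples by the mean residual."""
    tx = [0.0, 0.0, 0.0]
    for _ in range(iters):
        moved = [[p[i] + tx[i] for i in range(3)] for p in samples]
        # 1. Correspondence: closest preoperative model point per sample.
        pairs = [(p, min(model,
                         key=lambda m, p=p: sum((m[i] - p[i]) ** 2
                                                for i in range(3))))
                 for p in moved]
        # 2. Update: the mean residual becomes the translation correction.
        corr = [sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                for i in range(3)]
        tx = [tx[i] + corr[i] for i in range(3)]
    return tx

# Preoperative "surface": a 5 x 5 grid of points in the z = 0 plane.
model = [[float(i), float(j), 0.0] for i in range(5) for j in range(5)]
# Intraoperative samples: a subset of the surface shifted by a small offset.
offset = [0.3, -0.2, 0.1]
samples = [[m[i] + offset[i] for i in range(3)] for m in model[::3]]

tx = icp_translation(model, samples)   # ≈ [-0.3, 0.2, -0.1]
```

Because the applied offset is small relative to the point spacing, the closest-point correspondences are correct from the first iteration and the recovered translation exactly cancels the offset; with larger initial misalignment, ICP needs a coarse initial registration (e.g. from the fiducials mentioned above) to avoid converging to a wrong match.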
The second computing operation processes the positional information previously obtained for both the patient's anatomy and the equipment in order to deliver clinically relevant information, for instance by comparing the actual intraoperative situation with the preoperative or intraoperative planning. To this end, the relevant information is compared at each operative step. For example, during the TKA femoral extremity bone cut, the position of the oscillating saw is compared with the femur position. This allows the system to compute the relative position of the actual cutting plane with respect to the preoperative plan. Two valuable pieces of clinical information can then be extracted: the angular error between the plane normals and the distance error to the planned entry point in the bone.
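These two clinical indices reduce to simple geometry: the angle between the planned and actual cutting-plane normals, and the distance between the planned and actual entry points. A minimal sketch, with illustrative names and values:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def angular_error_deg(n_planned, n_actual):
    """Angle (degrees) between planned and actual cutting-plane normals."""
    c = dot(n_planned, n_actual) / (norm(n_planned) * norm(n_actual))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clamp rounding

def entry_point_error_mm(p_planned, p_actual):
    """Euclidean distance between planned and actual entry points (mm)."""
    return norm([a - b for a, b in zip(p_actual, p_planned)])

# Saw plane tilted 2 degrees about the x-axis relative to the plan.
tilt = math.radians(2.0)
ang = angular_error_deg([0.0, 0.0, 1.0],
                        [0.0, math.sin(tilt), math.cos(tilt)])   # ≈ 2.0°
dist = entry_point_error_mm([0.0, 0.0, 0.0], [1.2, 0.9, 0.0])    # 1.5 mm
```

Both quantities assume the saw pose and the femur pose have already been expressed in the same coordinate frame by the registration step described above.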

2.3 Visualization

Visualization produces the image presented to the user. In the case of augmented reality, this image representing the digital reality must be aligned with the surgeon's reality. To this end, the tracking information for the objects and the headset is used to define the viewpoint in reality and to render the virtual objects at the correct location. Figures 27.3 and 27.4 show examples of visual feedback in which clinical information is presented in the surgeon's field of view. In Fig. 27.4, the information is overlaid on the real object, whereas in Fig. 27.3 it is placed beside the scene as a virtual dashboard. The current bottleneck is the time the system takes to adapt to a change of situation: during fast motion, the viewpoint position may differ between the moment the tracking information is measured, the moment the image rendering is finished and the moment the picture is displayed. This creates inconsistency in the visual feedback. The problem can be mitigated by reducing the tracking and rendering delay, which should come with improved technological performance.

Fig. 27.3
figure 3

This image represents the student's view through the AR headset. The dot on the crossbar represents the distance to the target, vertically for the inclination angle and horizontally for the anteversion angle. The dot remains red until the error is less than 1° for both angles, at which point it turns green. (Adapted from [8])

Fig. 27.4
figure 4

This image shows the visual feedback seen by the user while drilling the femoral head in hip resurfacing. The arrow indicates the entry point and the orientation target. (a) The arrow is fully red because neither the orientation error nor the entry point error is below 1° and 1 mm, respectively. (b) The entry point error is below 1 mm, indicated by the green arrow tip. (c) The orientation error is below 1°, indicated by the completely green arrow (Adapted from [1])

3 How Could Augmented Reality Support Surgery?

The success of total hip and knee replacement is related to correct positioning of the implant. Precise implantation is of utmost importance when the implantation is personalized to the individual joint's anatomy and kinematics. In other words, following a precise patient-specific plan, implantation has to be as precise as possible, and real-time feedback on this precision is needed to ensure the final outcome matches what the surgeon planned. Augmented reality will soon integrate itself into different activities of orthopaedic practice, as several steps of the surgery may benefit from a 3D representation of information. The main improvements would occur in preoperative and intraoperative planning, intraoperative assistance and surgeon training.

3.1 Preoperative and Intraoperative Planning

To prepare the surgery, the practitioner takes numerous pieces of information into account. However, the shape of the anatomy is often presented as 2D information, such as radiographs or CT slices, or as a 3D surface shown on a 2D screen. The 3D nature of this information is thus limited by the presentation interface. Thanks to the new augmented reality headsets, which give each eye its own screen, planning can now be performed with 3D information presented through a genuinely 3D interface. This helps the clinician fully appreciate the spatial properties and depth, and should ultimately benefit the quality of the implantation and potentially the clinical outcomes.

3.2 Intraoperative Assistance

Nowadays, several devices already provide intraoperative assistance, such as navigation or robotic systems. However, the main interface used to display feedback information is a screen standing beside the operating zone, which means the surgeon must split their attention between the operating field and the screen. Augmented reality technology helps the surgeon focus their attention on the patient by overlaying feedback information directly into the field of view, enhancing theatre ergonomics. The surgeon can appreciate the feedback from the navigation system whilst keeping the visual cues required for precise motor control of their gestures. A first step in this direction was investigated by Pr Rodriguez, who projected the navigation screen into the surgeon's field of view. This simplified approach avoids the need for precise positioning of the headset and the real-time image-processing constraints required to reliably superimpose feedback information directly onto the patient. In shoulder surgery, Pr Gregory [2] used the HoloLens with manual registration. Because the HoloLens localizes itself in the room's reference frame, as soon as its positioning error (±5 mm [3]) accumulates or the patient moves, the registration is no longer valid and must be corrected. This highlights the importance of the tracking stage in augmented reality technology.

Finally, personalized kinematic techniques for hip and knee joint replacement aim at reproducing the individual's joint anatomy while also considering the joint's kinematic parameters. AR technology may improve precision in restoring the native anatomy and also enable better quality control after implantation of the final components.

Soon, the technology will be ready to overlay all the information the surgeon needs to proceed with the surgery. Such information could be used throughout the operative steps, from the orientation of the bone cutting planes to the implant's final position. Moreover, in certain conditions, once the procedure has started and bone cuts have been made, the positions of several landmarks used to define the implant position may have been altered; these landmarks may no longer be reliable, jeopardizing the final implant position.

Some preclinical applications have already been investigated for hip and knee surgery. For hip arthroplasty, Fotouhi et al. [4] used a real-time RGBD data overlay on C-arm data to guide cup positioning in total hip arthroplasty, achieving low errors in translation, anteversion and abduction of 1.98 mm, 1.10° and 0.53°, respectively. Liu et al. [1] used depth data with robotic assistance for guide-hole drilling in hip resurfacing; the positions and orientations of the drilled holes were compared with the preoperative plan, and the mean errors were approximately 2 mm and 2°. Van Duren et al. [5] built a digital fluoroscopic imaging simulator using orthogonal cameras to track coloured markers attached to the guide-wire during insertion of a dynamic hip screw; the accuracy of the algorithm increased with the number of iterations up to 20, beyond which it converged asymptotically to an error of 2 mm. Hiranaka et al. [6] showed that using augmented reality to project the fluoroscopy monitor into the surgeon's field of view during femoral nail insertion improved accuracy while reducing radiation exposure and insertion time. In knee surgery, Dario et al. [7] used an augmented reality mechatronic tool for arthroscopy, with an overall system error of 3–4 mm.

These preliminary results show that augmented reality could help the surgeon gain efficiency and safety during TKA and THA procedures, particularly in the context of personalized implant positioning.

3.3 Training

Augmented reality will soon play a major role in various aspects of medical practice. For example, it has been used in a training platform to provide feedback on acetabular cup orientation relative to a target. By such means, the trainee can improve their precision in placing the acetabular cup with optimal inclination and version, guided by real-time feedback in the AR headset. This approach avoids any break in visual feedback and refines motor-control training. An augmented reality training platform for acetabular cup positioning performed nearly as well in training medical students [8] as conventional training with expert feedback; the visual feedback from the platform, shown in Fig. 27.3, was comparable to expert feedback for this critical part of the THA.
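The red/green feedback logic described for such a platform can be expressed in a few lines. The sketch below assumes the 1° threshold mentioned in the caption of Fig. 27.3; names are illustrative.

```python
def dot_colour(inclination_err_deg, anteversion_err_deg, threshold_deg=1.0):
    """Feedback dot colour: green only when BOTH angular errors are below
    the threshold, red otherwise (as described for Fig. 27.3)."""
    within = (abs(inclination_err_deg) < threshold_deg
              and abs(anteversion_err_deg) < threshold_deg)
    return "green" if within else "red"

print(dot_colour(0.5, 0.8))   # green: both errors under 1 degree
print(dot_colour(0.5, 1.2))   # red: anteversion error still too large
```

Binary colour feedback of this kind gives the trainee an unambiguous target state while leaving the continuous error magnitudes to the dot's position on the crossbar.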

4 Conclusion

Augmented reality technology will undoubtedly soon play an important role in assisting joint replacement surgery. Like computer navigation systems and robotics, AR is likely to contribute to improving the precision of implantation, but with better intraoperative ergonomics and workflow and without adding significant extra cost to the procedure. Some technological bottlenecks still have to be solved before AR technology can be fully integrated into daily clinical practice.