Abstract
Successful surgical operations are characterized by routines that are preplanned and then executed during the actual operation. To achieve this, surgeons rely on experience acquired from the use of cadavers, from enabling technologies like virtual reality (VR), and from years of clinical practice. However, cadavers lack dynamism and realism, as they have no blood and can exhibit tissue degradation and shrinkage, while current VR systems do not provide amplified haptic feedback. This can negatively impact surgical training, increasing the likelihood of medical errors. This work proposes a novel Mixed Reality Combination System (MRCS) that pairs Augmented Reality (AR) technology and an inertial measurement unit (IMU) sensor with 3D printed, collagen-based specimens to enhance task performance in planning and execution. To achieve this, the MRCS charts out a path prior to user task execution based on a visual, physical, and dynamic environment reflecting the state of a target object. It does so by utilizing surgeon-created virtual imagery that, when projected onto a 3D printed biospecimen as AR, reacts visually to user input on the specimen's actual physical state. This allows the MRCS to react to the user in real time by displaying new multi-sensory virtual states of an object before the user acts on the actual physical state of that same object, enabling effective task planning. Tracked user actions using an integrated 9-Degree of Freedom IMU demonstrate task execution. This demonstrates that a user, with limited knowledge of the specific anatomy, can, under guidance, execute a preplanned task. In addition to surgical planning, this system can be generally applied in areas such as construction, maintenance, and education.
1 Introduction
Task planning is a major facet of numerous fields such as healthcare, construction, and transportation because it allows for greater accuracy and speed in completing important tasks. To increase the performance of task planning approaches, systems that can mimic the potential environment both with computational and physical approaches may provide significant help to the user. A task planning system that comprises singular components acting in unison toward a common goal is wholly functional if it can effectively execute a defined preplanned task set by a user.
In the current medical space for task planning, physicians use visual planning tools like scanned images from Computed Tomography (CT) (Marquez et al. 2021) to guide them throughout a surgical operation (Kersten-Oertel et al. 2013) or clinical diagnosis (Clymer et al. 2020). In addition, the use of detailed simulated anatomical environments (Dilley et al. 2020) has also been pursued, as this approach may have added benefits for surgical navigation (Tamam and Poehling 2014). However, creating such accurate physical environments that are unique can be resource-intensive due to the variations in physiology from person to person (Pfeiffer et al. 2018). In surgery, this approach would be nearly impractical as the models would need to be both physically and visually realistic across a diversity of people. Furthermore, the use of cadavers as alternatives for physically realistic models is challenging, as they may not only fail to represent the anatomical physiology for specific procedures (Balta et al. 2015; Kennel et al. 2018) but are also not commercially widely available for unlimited use (Kim et al. 2019). To this end, surgeons rely heavily on their years of clinical exposure, which can restrict challenging surgical procedures to a few specialists (Curtis et al. 2020).
The use of virtual imagery in surgery (Yeung et al. 2021; Al Janabi et al. 2020; Lam et al. 2014) through immersive engagement in systems like virtual reality (VR) platforms for planning has been shown to be effective in surgical operations involving orthognathic, traumatic, and microsurgery of the craniofacial skeleton (Efanov et al. 2018; Neumann et al. 1999; Lin et al. 2022). The current use of virtual imagery also provides opportunities for surgical technique iteration and examination in various clinical scenario settings, as the surgical scene can be reused (Efanov et al. 2018). In these instances, task planning using such systems has been based on three main areas: haptic feedback, sensitivity, and navigation; these areas make the approach comparable to industry-standard VR systems/simulators that already employ these features (Fan et al. 2018; Carmigniani et al. 2011).
The primary feature, referred to as haptic feedback, is the information response that virtual environments provide to the user based on the user's input or interaction. The information response can be characterized as a mix of sensory and force stimulation before, during, and after the user engagement with the target virtual platform and/or virtual object. Despite recent advances in realism in virtual environments, like the use of 360° stereoscopic videos (Pulijala et al. 2018) to enhance three-dimensional interaction, faulty estimates in egocentric distances between the human observer and surrounding objects still exist (Geuss et al. 2015). These egocentric distances not only influence user interaction through depth perception (Creem-Regehr et al. 2015) but also limit the ability of a VR system to be effective in task execution, as they are negatively perceived (Grushko et al. 2021). This negative perception of the haptic feedback due to misalignment further limits the realism associated with objects, as they are deemed non-responsive, rendering VR systems inaccurate and unreliable. Efforts to combat this non-responsiveness include introducing visual or tactile feedback (Tovares et al. 2014), such as haptic devices worn by the user to physically notify them of object interference during task execution (Tan et al. 2021), or hand-held force magnifiers attached to surgical tools (Lee et al. 2012). Unfortunately, these can be bulky, adding weight for the user, and can also require advanced skill to operate, cutting out users with limited skills (Ritchie et al. 2021). Furthermore, the additional cost associated with using these VR systems to provide these capabilities limits widespread integration into medical and clinical practice (Alaker et al. 2016).
The second feature, sensitivity, is the ability of the system to account for and quantify changes within its immediate virtual environment. This directly correlates with the alignment of the virtual imagery projected onto a target object and parameters associated with the user interaction, e.g., time to recognize a discrepancy or hesitancy to proceed due to image misalignment (Deutschmann et al. 2008). In some instances where VR systems are used, as in surgical planning procedures, corrections of approximately 0.48 cm (Qiu et al. 2019) in virtual prostate simulations can be used to check organ dimensions and determine the adjustments needed for proper image correlation by the user. Thus, any misalignment expressed within the VR system can lead to an incorrect pathology diagnosis (Fan et al. 2021). Sensitivity is important because even a minute deficit can make a system unreliable and unusable (Abich et al. 2021).
The third feature of VR simulators, navigation, is the ability of the system to track the user and/or target object in a defined space to provide a feedback loop for image placement in the user’s field of view. Objects that move dynamically can have multiple coordinates as they shift in real-time. To obtain accurate measurements of these dynamic objects in scenarios involving virtual imagery, Inertial Measurement Unit Sensors (IMU) utilizing 6 degrees of freedom (DOF) (Luo et al. 2010) and optical trackers (Vishniakou et al. 2019) have been proposed. However, optical trackers still require a direct line of sight between the user and the target object, which can negatively impact surgical technique. IMUs can track object motion without the need to physically see the device, which can be a significant advantage (Kabuye et al. 2022).
Aside from these three features of VR systems that are needed to render them useful, additional issues exist. Inaccurate user hand-eye coordination (Mutasim et al. 2020) as well as cybersickness (Martirosov et al. 2022), while attempting to match virtual planning aids, can lead to difficulty in task planning execution. This difficulty in task planning execution is further exacerbated by the skill set of the user greatly varying from novice to expert (Jannin and Morineau 2018).
Given these limitations with current VR Systems, and to bridge these gaps, we introduce the use of augmented reality (AR) in surgical planning combined with physical 3D printed model systems.
AR is an approach that is related to the Reality–Virtuality (RV) continuum (Skarbez et al. 2021), which distinguishes objects from a real or virtual perspective. When a virtual object is brought into the physical environment, it is then referred to as AR (Liberatore and Wagner 2021). When a user interacts with an AR image or virtual environment in a physical space, then it is referred to as Mixed Reality (MR) (Carmigniani et al. 2011). These virtual environments are comprised of singular components that are generated and programmed to respond to user interactions either as a collective or within their singular pieces all in a module. User interactions are defined as reciprocal actions or influence of the user with the objects in their physical and virtual environment. Once these virtual environments are placed via projection or any other means onto any independent target, i.e., an object outside the AR system, that target then exhibits dynamic motion independent of the virtual imagery. Then, the components within these virtual environments can either match these dynamic motions or demonstrate a dynamic path for the independent target based on user interactions. Our approach uses the field of view of the user to predict the future state of the target and then, projects this predicted state in the form of an AR image into the field of space, enabling the visualization of the user with regard to their intended action on the independent target. AR has previously been used in educational settings (Garzón et al. 2019) to demonstrate learning gains and in healthcare scenarios to reduce the cognitive decline negatively associated with task performance when route planning is introduced (Pereira et al. 2019) as well as shorten task assembly times (Baird and Barfield 1999). AR has also been employed in mobile platforms for successful micro-surgical dissection (Renner et al. 2013).
We propose the combination of an AR environment and the 3D printed systems to enhance task planning in surgery. This approach is important as it demonstrates the required tactile, haptic feedback that would come from a user and object interaction through MR. Further, for surgical planning where the anatomy is both unique in texture and size, 3D bio-printing (Lee et al. 2019; Mirdamadi et al. 2020) of complete anatomical structures using hydrogels such as collagen is employed (Sei et al. 2014). Such a system will look and feel more realistic to the user, and by connecting an IMU to a surgical tool, the precise incision of the procedure can be not only felt but visually simulated.
This proposed combination of multi-level systems into a single MR system architecture provides avenues for task planning by not only incorporating realistic haptic feedback but also replicating complex anatomical pathologies (Tejo-Otero et al. 2019). These are critical for surgical technique planning to reduce the lengthy clinical exposure time in training that is required for positive and efficient surgical execution. In summary, efficiency in task execution could be improved using enabling technologies like augmented reality (Alves et al. 2021) combined with physical 3D printed specimens to preplan task execution.
2 Method
In this section, we detail the approach used in setting up the Mixed Reality Combination System (MRCS).
2.1 System overview
The proposed Mixed Reality Combination System (MRCS) has three main components: the AR environment, the Printed environment, and the IMU Tracking (Fig. 1). The AR environment contains the composite virtual imagery to be projected onto the target area. The AR Device serves as the conduit platform through which the AR environment is placed in the user’s field of view, encompassing the target area and the dynamic target object which includes the 3D bio-print. The User Platform consists of the User Interface Interaction module that houses the task planning schematic for any given task, the dynamic target on which the AR environment is projected, the user tracking system that relays user task location information (IMUs), and the user as the intended recipient of the task planning.
These three components work together to project imagery that will guide and inform a user in planning a task for successful execution. The overall goal is that a user utilizing the MRCS platform would be able not only to see 3D imagery projected onto the 3D bio-printed samples in their field of view but also to interact with that 3D imagery dynamically. The 3D bio-printed samples then add the realism expected from this interaction via haptic feedback, as the projected imagery adapts and guides the user to the next steps. This overall goal is demonstrated and fully realized in a test scenario.
2.2 MRCS architecture overview
2.2.1 AR environment
The Augmented Reality Environment consists of 3D models, FEA modeling, and the Virtual Environment.
AR 3D models
Virtual environments can be created using 3D models of actual objects (Gasques Rodrigues et al. 2017). To accomplish this, AR imagery is first obtained by stitching together multiple images to create detailed composite models with the required level of object fidelity for printing. Stitching involves integrating multiple layers of images for an object to create a singular 3D image (Fig. 2). FEA is then integrated into these 3D objects to simulate multi-physics responses based on user interactions. Each dynamic interaction is modeled to reflect the timed interaction of the user with the target AR imagery to be projected in the user's view. In the case of a surgical technique like a vertical incision, the dynamic interaction is the intersection of the scalpel with the tissue when the vertical cut is made, which leads the tissue to separate. Anatomage Inc. "slices" human anatomy, resulting in layered models. These layers are then stitched together in the MRCS environment to produce 3D models of vascular and tissue structures for display. The overlay of the virtual models is done with the HMD (i.e., the Microsoft HoloLens 2 Head Mounted Display). The HoloLens 2 is a mixed reality headset that can be programmed to project virtual imagery into the user's field of view. The HoloLens 2 has additional programming capabilities that are used to track hand and eye movements. It can also use a spatial mapping feature on the target area to project updated virtual anatomical imagery to match the physical interactions of a user and a target 3D printed biospecimen (Evans et al. 2017).
Three-dimensional imagery (Fig. 2) can be obtained from a composite of 2D images of the various objects intended to be in the field of view of the user. The stitching of these 2D images can be done using multiple third-party software packages; in this paper, we use the Anatomage software. The output of this 2D image fusion is shown in Fig. 3. The final 3D virtual image depends on the target Interaction Module designed for the user, which allows for multiple iterations and combinations. The 3D imagery can also contain programmed dynamic motions that are triggered by user interaction with the dynamic target in the defined space, as monitored by the HMD.
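As a minimal sketch of the stitching step described above, ordered 2D slices can be stacked into a single 3D volume. This is an illustrative Python analogue, not the Anatomage pipeline itself; the function name and slice format are assumptions.

```python
# Sketch: assembling a 3D volume from ordered 2D image slices, as a
# stand-in for the slice-stitching step described above. The slice
# format (equal-sized grayscale arrays) is an illustrative assumption.
import numpy as np

def stitch_slices(slices):
    """Stack ordered 2D grayscale slices (H x W arrays) into a 3D volume."""
    if not slices:
        raise ValueError("no slices to stitch")
    shape = slices[0].shape
    if any(s.shape != shape for s in slices):
        raise ValueError("all slices must share the same dimensions")
    return np.stack(slices, axis=0)  # volume shape: (depth, H, W)

# Example: 5 synthetic 64x64 slices -> one 5x64x64 volume
volume = stitch_slices([np.random.rand(64, 64) for _ in range(5)])
print(volume.shape)  # (5, 64, 64)
```

In practice each slice would come from the segmented anatomy export rather than random data, and the stacked volume would then be meshed for FEA and display.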
Dynamic FEA image modeling
Multi-physics modeling of dynamic interactions is added to the models using the third-party software ANSYS Inc., though any high-end commercial FEM package can be used. These dynamic interactions are representations of expected outcomes of the user interactions with the projected AR imagery (Fig. 4). The dynamic responses can be initiated in one of two instances when the AR system recognizes the user in this field. The first instance is by having an object boundary for noting where the position of the user is in relation to the projected image and the target object. The second instance is by using spatial mapping on the HMD to relate positions of the virtual environment in relation to the physical environment, so the dynamic interaction can follow the physical user interactions with the target object.
Virtual environment generation
A virtual environment creator is used to superimpose the dynamic modeling onto the AR Imagery. This approach adds dynamic interactions and responses to the projected AR imagery.
For this approach, the stitched AR imagery with the FEA module is uploaded to an interactive real-time 3D environment, and interaction scripts within the UNITY platform are either baseline added or manually authored by the researchers to allow for desired interaction with the imagery. These scripts (Fig. 5) can include user interactions such as a manipulation handler for engagement with the projected 3D imagery; an object manipulator to allow for image-defined distortion during the incision; an elastic manager to recognize different points at which the material properties from the FEA modeling need to match the physical incision act; and a bounds control to pair with the spatial mapping of the HMD to determine where the user and the target object are at any given time.
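The interaction scripts themselves are Unity C# components; as a language-neutral sketch, the gating logic they implement — advancing the projected imagery to its next FEA-derived state only when the tracked tool enters the object's bounds — can be outlined in Python. The class names, state labels, and threshold below are illustrative assumptions, not the actual MRTK components.

```python
# Illustrative Python analogue (not the actual Unity C# scripts) of the
# bounds-control + object-manipulator logic: step the projected imagery
# through precomputed FEA deformation states on each valid tool contact.
from dataclasses import dataclass

@dataclass
class BoundsControl:
    center: tuple   # (x, y, z) of the target object, from spatial mapping
    radius: float   # interaction radius in metres (illustrative)

    def contains(self, point):
        return sum((p - c) ** 2 for p, c in zip(point, self.center)) <= self.radius ** 2

class ObjectManipulator:
    """Advances through precomputed FEA deformation states on valid contact."""
    def __init__(self, states, bounds):
        self.states = states   # e.g. ["intact", "incised", "separated"]
        self.bounds = bounds
        self.index = 0

    def on_tool_moved(self, tool_tip):
        if self.bounds.contains(tool_tip) and self.index < len(self.states) - 1:
            self.index += 1
        return self.states[self.index]

bounds = BoundsControl(center=(0.0, 0.0, 0.0), radius=0.05)
manip = ObjectManipulator(["intact", "incised", "separated"], bounds)
print(manip.on_tool_moved((0.0, 0.0, 0.01)))  # inside bounds -> "incised"
print(manip.on_tool_moved((0.5, 0.0, 0.0)))   # outside bounds -> stays "incised"
```

The real scripts additionally handle elastic-manager material transitions and spatial-mapping anchors, but the state-advance-on-breach pattern is the core of the interaction.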
2.2.2 Printed environment
Bio printed specimen
The MRCS involves a biospecimen (Fig. 6) that is printed using a FlashForge 3D printer customized to accept bio-print materials. The 3D prints use 3–4% alginate in an alginate support material cured for 24 h (Lee et al. 2019) to approximate human tissue properties like vasculature. The collagen-based support bath material for the 3D bio-print consists of approximately \(60\,\upmu {\hbox {m}}\) circular gelatin particles, which are suitable for printing features of 50–\(80\,\upmu {\hbox {m}}\) from a \(140\,\upmu {\hbox {m}}\) nozzle.
The 3D printed biospecimen is customized to reflect the surgical pathology for which surgical planning is difficult to navigate for a practicing clinician. The virtual environment is customized to add the level of detail usually reserved for actual human anatomy interaction such as vasculature (Wan et al. 2017) and scaffolds (Yu et al. 2017). Collagen, as a material, is chosen for the bio-printing of the specimen because it can mimic human tissue properties (Lee et al. 2019).
2.2.3 Tracking environment
Inertial measurement unit sensor
The user tracking module consists of two sub-components. The first is the HMD spatial mapping feature to visually track the user tools in the field of view. The second sub-component is a 9DOF IMU (InvenSense ICM20948) in the form of a flexible IMU (Fig. 7) that can be attached to the pivot of any user tool to track motions (Kabuye et al. 2022). In Fig. 7, the 9DOF IMU is shown attached to a surgeon’s scalpel. The wires attached to the flexible IMU are for transmitted signals and are connected to a power source. The signals can also be transmitted via Bluetooth. The user tracking is done to ensure task execution and completion.
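Since the test scenario (Sect. 3) reduces the tracked incision to the pitch axis, a quasi-static pitch estimate can be recovered from the IMU's gravity-referenced accelerometer alone. This is a hedged sketch: the axis convention and sensor interface are illustrative assumptions, not the ICM20948 driver or the fusion filter actually used.

```python
# Sketch: estimating tool pitch from a 9DOF IMU's accelerometer sample,
# assuming the tool is quasi-static during the incision so that gravity
# dominates the reading. Axis conventions here are illustrative.
import math

def pitch_from_accel(ax, ay, az):
    """Pitch angle in degrees from a gravity-referenced accelerometer
    sample (m/s^2); range is +/-90 deg for this single-sample estimate."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

# Tool held level: gravity entirely on z -> pitch near 0 degrees
print(round(pitch_from_accel(0.0, 0.0, 9.81), 1))
# Tool tipped nose-down: gravity entirely on x -> pitch near -90 degrees
print(round(pitch_from_accel(9.81, 0.0, 0.0), 1))
```

A full 9DOF fusion (gyroscope and magnetometer included) would extend this to the absolute \(\pm 180^\circ\) orientation range used in the deployment section.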
2.2.4 MRCS sub-architecture
To have all three of these components work, the MRCS relies on additional sub-architecture components: the AR Device, the User Platform, and the User Interaction Module.
AR device
The AR device is a platform with which virtual environments are placed in a physical environment through a user-defined interface. The AR device used here is the Microsoft HoloLens 2 Development Edition (MH2), which is a Head Mounted Display (HMD) worn by the user. It can stream virtual environments into the visual frame of the user via 3D imagery. The MH2 is programmed to use spatial mapping in the software application to identify the position of the dynamic target object and further overlay these virtual environments onto it. Finite Element Analysis (FEA) is used to model the physical motion of the 3D bio-printed object so that this information can be linked to the AR environment for feedback and motion of the projected system. Once user interaction is detected in the proximity of the virtual environment being projected onto the dynamic target, through spatial mapping of the space around the target object by the HMD, the dynamic responses from the 3D bio-printed specimen can be matched with a custom FEA Dynamic modeling outcome. This is done through the authored scripting of the HoloLens 2 application to recognize, through the HMD, when the set virtual boundary around the physical object is breached due to the dynamic motion of the 3D bio-printed specimen. This virtual dynamic response is done to match the physical environment feedback. The matching ensures that the virtual environment in the field of view of the user changes to a future expected state of the 3D printed biospecimen. The AR device can detect the motions of the target object and match them with the 3D imagery before, during, and after user interaction. This process is based on the customized FEA dynamic analysis performed to obtain simulated tissue reactions.
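The overlay step described above — anchoring the virtual model to the tracked physical target — amounts to transforming model-space geometry by the target's pose from spatial mapping. The rotation-plus-translation pose representation below is an illustrative simplification of the HMD's anchoring, sketched in Python rather than the HoloLens runtime.

```python
# Sketch: overlaying virtual model vertices onto the tracked physical
# target using its pose (rotation R, translation t) from spatial mapping.
# Pose representation is an illustrative simplification of HMD anchoring.
import numpy as np

def overlay_vertices(vertices, rotation, translation):
    """Map model-space vertices (N x 3) into world space: v' = R v + t."""
    return np.asarray(vertices) @ np.asarray(rotation).T + np.asarray(translation)

# Identity rotation, target anchored 1 m in front of the user (z axis)
verts = [[0.0, 0.0, 0.0], [0.01, 0.0, 0.0]]
world = overlay_vertices(verts, np.eye(3), [0.0, 0.0, 1.0])
print(world[0])  # [0. 0. 1.]
```

Any drift between this transform and the physical target is exactly the misalignment that the sensitivity requirement (Sect. 1) seeks to bound.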
User platform
The User Platform consists of a User, the User Interaction Module, and the Dynamic Target. The “User” is the intended recipient of this task planning profile and who executes the task. The “User Interaction Module” is the set of instructions and commands that can inform the user of the interactions required during engagement with the MRCS. These commands also include visual aids that assist the user in planning a path for task execution. The “Dynamic Target” is an object that demonstrates independent motion when there is engagement from the user. In the later test scenario, the Dynamic Target is the 3D printed biospecimen. During an engagement with the target object, the user receives the haptic feedback in a closed-loop system, ensuring that the actions of the user and results from the physical interaction are recorded.
User interaction module
The User Interaction Module consists of a software application developed in the Unity Platform. The user interface is designed to increase learning among users through the use of animations (Egan et al. 2015). The software application of the module encodes the actual task that the user is required to execute. Here, the test scenario instructs the user on how to make a surgical incision (Fig. 8). A series of steps in this process includes identifying the site of the incision, followed by the direction of the surgical incision to be made. This portion serves as the navigation. The anatomy models are built using the Unity platform. These models have no dynamic properties, as these properties are only mapped onto the target tissue.
3 Test scenario
To demonstrate the functionality of the MRCS, a scenario that requires task path planning is created. The chosen task is making a surgical incision into a simulated tissue. The chosen surgical technique is an incision to create access to the Obliquus Capitis Inferior and Rectus Capitis Posterior muscles located in the back of the head (Fig. 9) for positioning and trauma. This target site and technique are chosen because any additional pathologies, such as overgrown masses or tumors in this area, make the needle insertion technique difficult due to the confined space and its proximal location to the spine and brain stem. Hence, a precise incision for access into this area is critical.
The initial incision cut is made by a surgical scalpel into a representative target tissue. The motion and pattern of the technique are a basic vertical cut (as in Fig. 8) into the sample to simulate entry into physical human tissue. To track the user during this task, the primary measurement of the IMU will be the absolute orientation that the tool makes during the incision. The secondary measurement for tracking is the depth recorded by the Head Mounted Display of the surgical tool in the field of view of the user.
Using the explicit dynamics module within ANSYS 2022, this approach is modeled as a surgical scalpel made of stainless steel interacting with a rectangular silicone block that has multiple through-holes, which approximate the vascular conduits in the tissue (Fig. 10) and the weight distribution across multiple skin textures. This models the interaction at different rates of motion. Because of the varying dermis in skin tissue, a new material assignment is created to replicate skin tissue behavior (Li et al. 2012; Vedbhushan et al. 2013; Joodaki and Panzer 2018) with the properties approximated in Table 1.
As an example of the explicit dynamics module based on the test scenario, the vertical motion of the scalpel during an incision is simulated. The first step is an incision in the target area. The equivalent velocity for this surgical incision is 0.015 m/s (Vedbhushan et al. 2013) over a 4-s total travel time to achieve a 15 mm depth incision. In the study, the incision is repeated a total of three times to demonstrate repeatability.
In addition, the equivalent stress and strain for this approximated model (Fig. 11a) help understand the tissue properties relative to viscoelasticity. The numerical model of the bi-linear elasticity of collagen fibers is used for this approximation (Joodaki and Panzer 2018). The behavior of the simulated tissue in our explicit dynamics module is then reflective of the relationship that viscoelastic materials demonstrate when under axial stress through specimen fracture as shown in Fig. 11a. The Stress–Strain Plot specifically demonstrates this viscoelastic behavior as seen in similar brain tissue (Hosseini-Farid et al. 2019) showing an initial linear slope along with a viscoelastic response due to extreme tissue deformation that results in rupture.
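The bi-linear elasticity used for this approximation can be sketched as a piecewise-linear stress–strain relation: a soft toe region before collagen fibres are recruited, and a stiffer region after. The moduli and transition strain below are illustrative placeholder values, not the parameters of the ANSYS model.

```python
# Sketch of a bi-linear elastic stress-strain relation of the kind used
# to approximate collagen-fibre tissue response (Joodaki and Panzer 2018).
# The moduli e1, e2 and transition strain are illustrative values only.
def bilinear_stress(strain, e1=0.02e6, e2=0.5e6, strain_t=0.3):
    """Stress (Pa) for a given strain: soft toe region below strain_t,
    stiffer fibre-recruited region above it (continuous at strain_t)."""
    if strain <= strain_t:
        return e1 * strain
    return e1 * strain_t + e2 * (strain - strain_t)

print(bilinear_stress(0.1))  # toe region: low stress
print(bilinear_stress(0.5))  # fibre-recruited region: much stiffer response
```

A full viscoelastic model would add rate dependence and a rupture criterion on top of this backbone curve, matching the deformation-to-rupture behavior seen in the stress–strain plot.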
The strain plot (Fig. 11b) further shows the three regions associated with a viscoelastic material: a linear elastic region, followed by a plateau region before the rupture of the tissue, and then a densification region.
The 3D imagery results of this FEA dynamic interaction study are then uploaded to the interactive, real-time 3D environment Unity Platform. Inside the Unity Platform, a software application is created through script editing (Fig. 5). It uses not only the spatial mapping of the HMD but also hand tracking and boundary object interaction to determine when to initiate the simulation of the similar FEA modeling of the 3D printed biospecimen tissue under stress. This application can then be uploaded to the HMD for projection into a user’s field of view.
4 Deployment
The instructions for task path planning for a surgical incision are complemented with additional visual aids that instruct the user on how to perform the incision. These instructions are relayed via a user application interface that the user can commence, serving as the navigation portion of the demonstration. As the user engages with the 3D printed collagen-based specimen, generating haptic feedback, the depth and the angular motion of the surgical cut are tracked with the 9DOF IMU and the HMD (Fig. 12). The tracking of the user with the IMU in the test scenario is meant to ascertain task completion rather than user task accuracy, as the test scenario demonstrates feasibility. The additional location data are also used as part of the HMD visual image projection to ensure that the AR environment is overlaid on the right location and target, highlighting the system's sensitivity.
The tracked vertical incision from the 9 DOF IMU is determined for all three incisions to demonstrate the depth of the incision as it is translated to the Pitch (z-axis) for an absolute angle orientation of the user (Fig. 12). This approximation is made to one axis because, at the point of interaction of the scalpel in a user’s hand with the 3D bio-print tissue, there is no lateral motion. The only motion that exists is the absolute orientation of the vertical cut with the user’s wrist as the pivot (Kabuye et al. 2022). The placement of the 9 DOF IMU on the side pivot of the scalpel also indicates that the corresponding axis for tracking will be the Z (Pitch) with a range of \(\pm \,180^\circ\). The time for each run is normalized to ensure that the three runs can be fully analyzed when compared to each other. To demonstrate user proficiency, the root mean square error (RMSE) difference between the three surgical incisions is calculated. The lower RMSE indicates that a user has been able to follow the path recommended by the MRCS for the surgical cut as they would fall within the preplanned bounds as measured by the HMD. It should be noted that the efficacy of the IMU to accurately track surgical tools has been demonstrated previously (Kabuye et al. 2022) and we demonstrate its integration in an MRCS environment as shown in Fig. 13.
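The normalization and RMSE comparison of the three runs can be sketched as follows. The sample traces are synthetic placeholders, not the recorded IMU data; only the procedure (resample each run onto a common time base, then compute pairwise RMSE of the pitch traces) mirrors the text.

```python
# Sketch: normalising tracked incision runs to a common time base and
# computing the RMSE between pitch traces, as described for the three
# incisions. The traces below are synthetic, not the recorded IMU data.
import numpy as np

def normalize_run(t, pitch, n=100):
    """Resample a (time, pitch) trace onto n evenly spaced points in [0, 1]."""
    tn = (np.asarray(t, dtype=float) - t[0]) / (t[-1] - t[0])
    return np.interp(np.linspace(0.0, 1.0, n), tn, pitch)

def rmse(a, b):
    """Root mean square error between two equal-length traces (degrees)."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

run1 = normalize_run([0, 1, 2, 4], [0.0, -5.0, -10.0, -15.0])
run2 = normalize_run([0, 2, 3, 5], [0.0, -6.0, -11.0, -15.0])
print(round(rmse(run1, run2), 2))  # pitch difference between runs, in degrees
```

A low pairwise RMSE across the three normalized runs is what indicates, as in the text, that the user followed the preplanned path consistently.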
Although the test is not focused on task accuracy, the RMSE difference of \(2.3^\circ\) is within the expected range of \(2.9^\circ\) (Kabuye et al. 2022) for a tracked incision into simulated tissue (Table 2). However, the average depth of the incision made is greater than the target depth of 15 mm. This could be due to the limited number of vertical incisions, which are insufficient for proficiency in surgical planning, as the test subject had no prior knowledge of the surgical procedure. Another issue could arise from the 3D printed collagen-based specimen failing to provide an amplified haptic response that would communicate to the user when to stop, as the additional 3D imagery only visually shows the tissue separation. One way to correct this would be either to add additional dynamism in the 3D imagery corresponding to tissue vasculature and blood, serving as another visual aid for the user, or to post-process the 3D printed biospecimen to provide improved physical texture. The MRCS demonstrates haptic feedback, sensitivity, and navigation capability in this test scenario.
5 Conclusion
This work demonstrates the ability of the Mixed Reality Combination System (MRCS) to not only guide a user’s navigation as they are preplanning the task execution through the image visualization and interaction but also to track the task execution to quantify their skill set in achieving task completion. By pairing a 3D printed biospecimen and projecting virtual imagery onto it, an augmented reality environment is created for a user that allows them to plan a task prior to execution that is tracked using an Inertial measurement unit sensor.
At this early stage of the capability demonstration, the feasibility demonstration of the Mixed Reality platform is the primary goal. The accuracy of the user in the capability demonstration is secondary given their limited skill set as a novice. Other factors not under consideration, like additional mental workloads that the user would be under while engaged with the MRCS during task execution, could exist. This could include, but is not limited to, the mental work needed to adjust between the virtual and physical images in the same visual frame (Xi et al. 2022; Jeffri and Awang Rambli 2021). The additional mental workload would arise from not only processing (in the brain) the additional dynamism in the 3D imagery corresponding to vasculature that is input in the field of vision as the AR imagery for the user as a visual aid, but also fatigue arising from having the physical weight of the AR head-mounted display on the user (Buchner et al. 2022; Rho et al. 2020). This additional mental workload can be further evaluated to quantify its impact on user task performance in the future.
The application of this enabling technology demonstrates that the proposed MRCS platform can provide an improved, iterative way for a user to gain exposure to task execution along complex guided paths. Future work will study its accuracy in different environments and its use for path planning by surgeons.
References
Abich J, Parker J, Murphy JS et al (2021) A review of the evidence for training effectiveness with virtual reality technology. Virtual Real 25(4):919–933. https://doi.org/10.1007/S10055-020-00498-8/TABLES/2
Al Janabi HF, Aydin A, Palaneer S et al (2020) Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: a simulation-based feasibility study. Surg Endosc 34(3):1143–1149. https://doi.org/10.1007/s00464-019-06862-3
Alaker M, Wynn GR, Arulampalam T (2016) Virtual reality training in laparoscopic surgery: a systematic review and meta-analysis. Int J Surg 29:85–94. https://doi.org/10.1016/j.ijsu.2016.03.034
Alves JB, Marques B, Dias P et al (2021) Using augmented reality for industrial quality assurance: a shop floor user study. Int J Adv Manuf Technol 115(1–2):105–116. https://doi.org/10.1007/S00170-021-07049-8/FIGURES/12
Baird KM, Barfield W (1999) Evaluating the effectiveness of augmented reality displays for a manual assembly task. Virtual Real 4(4):250–259. https://doi.org/10.1007/BF01421808
Balta JY, Lamb C, Soames RW (2015) A pilot study comparing the use of Thiel- and formalin-embalmed cadavers in the teaching of human anatomy. Anat Sci Educ 8(1):86–91. https://doi.org/10.1002/ase.1470
Buchner J, Buntins K, Kerres M (2022) The impact of augmented reality on cognitive load and performance: a systematic review. J Comput Assist Learn 38(1):285–303. https://doi.org/10.1111/jcal.12617
Carmigniani J, Furht B, Anisetti M et al (2011) Augmented reality technologies, systems and applications. Multimed Tools Appl 51(1):341–377. https://doi.org/10.1007/s11042-010-0660-6
Clymer DR, Long J, Latona C et al (2020) Applying machine learning methods toward classification based on small datasets: application to shoulder labral tears. J Eng Sci Med Diagn Ther. https://doi.org/10.1115/1.4044645
Creem-Regehr SH, Stefanucci JK, Thompson WB (2015) Perceiving absolute scale in virtual environments: how theory and application have mutually informed the role of body-based perception. Psychol Learn Motiv 62:195–224. https://doi.org/10.1016/BS.PLM.2014.09.006
Curtis NJ, Foster JD, Miskovic D et al (2020) Association of surgical skill assessment with clinical outcomes in cancer surgery. JAMA Surg 155(7):590. https://doi.org/10.1001/jamasurg.2020.1004
Deutschmann H, Steininger P, Nairz O et al (2008) “Augmented Reality’’ in conventional simulation by projection of 3-D structures into 2-D images. Strahlentherapie und Onkologie 184(2):93–99. https://doi.org/10.1007/s00066-008-1742-5
Dilley J, Singh H, Pratt P et al (2020) Visual behaviour in robotic surgery-Demonstrating the validity of the simulated environment. Int J Med Robot Comput Assist Surg. https://doi.org/10.1002/RCS.2075
Efanov JI, Roy AA, Huang KN et al (2018) Virtual surgical planning: the Pearls and pitfalls. Plastic Reconstr Surg Global Open. https://doi.org/10.1097/GOX.0000000000001443
Egan P, Schunn C, Cagan J et al (2015) Improving human understanding and design of complex multi-level systems with animation and parametric relationship supports. Des Sci. https://doi.org/10.1017/DSJ.2015.3
Evans G, Miller J, Iglesias Pena M, et al (2017) Evaluating the Microsoft HoloLens through an augmented reality assembly application. In: Sanders-Reed JJN, Arthur JTJ (eds) Degraded environments: sensing, processing, and display 2017, p 101970V. https://doi.org/10.1117/12.2262626
Fan M, Yang X, Ding T et al (2021) Application of ultrasound virtual reality in the diagnosis and treatment of cardiovascular diseases. J Healthc Eng. https://doi.org/10.1155/2021/9999654
Fan Z, Ma C, Zhang X, et al (2018) 3D augmented reality-based surgical navigation and intervention. In: Mixed and augmented reality in medicine. CRC Press, pp 251–263. https://doi.org/10.1201/9781315157702-17
Garzón J, Pavón J, Baldiris S (2019) Systematic review and meta-analysis of augmented reality in educational settings. Virtual Real 23(4):447–459. https://doi.org/10.1007/S10055-019-00379-9/TABLES/9
Gasques Rodrigues D, Jain A, Rick SR, et al (2017) Exploring mixed reality in specialized surgical environments. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems—CHI EA ’17. ACM Press, New York, NY, USA, pp 2591–2598. https://doi.org/10.1145/3027063.3053273
Geuss MN, Stefanucci JK, Creem-Regehr SH et al (2015) Effect of display technology on perceived scale of space. Hum Factors J Hum Factors Ergon Soc 57(7):1235–1247. https://doi.org/10.1177/0018720815590300
Grushko S, Vysocký A, Oščádal P et al (2021) Improved mutual understanding for human–robot collaboration: combining human-aware motion planning with haptic feedback devices for communicating planned trajectory. Sensors 21(11):3673. https://doi.org/10.3390/s21113673
Hosseini-Farid M, Rezaei A, Eslaminejad A et al (2019) Instantaneous and equilibrium responses of the brain tissue by stress relaxation and quasi-linear viscoelasticity theory. Scientia Iranica. https://doi.org/10.24200/sci.2019.21314
Jannin P, Morineau T (2018) Cognitive oriented design and assessment of augmented reality in medicine. Mixed Augment Real Med. https://doi.org/10.1201/9781315157702-8
Jeffri NFS, Awang Rambli DR (2021) A review of augmented reality systems and their effects on mental workload and task performance. Heliyon 7(3):e06277. https://doi.org/10.1016/j.heliyon.2021.e06277
Joodaki H, Panzer MB (2018) Skin mechanical properties and modeling: a review. Proc Inst Mech Eng Part H J Eng Med. https://doi.org/10.1177/0954411918759801
Kabuye E, Hellebrekers T, Bobo J et al (2022) Tracking of scalpel motions with an inertial measurement unit system. IEEE Sens J 22(5):4651–4660. https://doi.org/10.1109/JSEN.2022.3145312
Kennel L, Martin DMA, Shaw H et al (2018) Learning anatomy through Thiel- vs. formalin-embalmed cadavers: student perceptions of embalming methods and effect on functional anatomy knowledge. Anat Sci Educ 11(2):166–174. https://doi.org/10.1002/ase.1715
Kersten-Oertel M, Jannin P, Collins DL (2013) The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph 37:98–112. https://doi.org/10.1016/j.compmedimag.2013.01.009
Kim DH, Kim Y, Park JS et al (2019) Virtual reality simulators for endoscopic sinus and skull base surgery: the present and future. Clin Exp Otorhinolaryngol. https://doi.org/10.21053/ceo.2018.00906
Lam CK, Sundaraj K, Sulaiman MN (2014) Computer-based virtual reality simulator for phacoemulsification cataract surgery training. Virtual Real 18(4):281–293. https://doi.org/10.1007/S10055-014-0251-3/TABLES/2
Lee A, Hudson AR, Shiwarski DJ et al (2019) 3D bioprinting of collagen to rebuild components of the human heart. Science 365(6452):482–487. https://doi.org/10.1126/science.aav9051
Lee R, Wu B, Klatzky R, et al (2012) Hand-held force magnifier for surgical instruments: evolution toward a clinical device. In: Lecture Notes in Computer Science, vol 7815. Springer, pp 77–89. https://doi.org/10.1007/978-3-642-38085-3_9
Li C, Guan G, Reif R et al (2012) Determining elastic properties of skin by measuring surface waves from an impulse mechanical stimulus using phase-sensitive optical coherence tomography. J R Soc Interface 9(70):831–841. https://doi.org/10.1098/rsif.2011.0583
Liberatore MJ, Wagner WP (2021) Virtual, mixed, and augmented reality: a systematic review for immersive systems research. Virtual Real. https://doi.org/10.1007/s10055-020-00492-0
Lin W, Zhu Z, He B et al (2022) A novel virtual reality simulation training system with haptic feedback for improving lateral ventricle puncture skill. Virtual Real 26(1):399–411. https://doi.org/10.1007/S10055-021-00578-3/TABLES/3
Luo Z, Lim CK, Chen IM et al (2010) A virtual reality system for arm and hand rehabilitation. Front Mech Eng 6(1):23–32. https://doi.org/10.1007/S11465-011-0202-6
Marquez P, Volk GF, Maule F et al (2021) The use of a surgical planning tool for evaluating the optimal surgical accessibility to the stapedius muscle via a retrofacial approach during cochlear implant surgery: a feasibility study. Int J Comput Assist Radiol Surg 16(2):331–343. https://doi.org/10.1007/S11548-020-02288-8/FIGURES/7
Martirosov S, Bureš M, Zítka T (2022) Cyber sickness in low-immersive, semi-immersive, and fully immersive virtual reality. Virtual Real 26(1):15–32. https://doi.org/10.1007/S10055-021-00507-4/FIGURES/14
Mirdamadi E, Tashman JW, Shiwarski DJ et al (2020) FRESH 3D bioprinting a full-size model of the human heart. ACS Biomater Sci Eng 6(11):6453–6459. https://doi.org/10.1021/acsbiomaterials.0c01133
Mutasim AK, Stuerzlinger W, Batmaz AU (2020) Gaze tracking for eye-hand coordination training systems in virtual reality. In: Extended abstracts of the 2020 CHI conference on human factors in computing systems. ACM, New York, NY, USA, pp 1–9. https://doi.org/10.1145/3334480.3382924
Neumann P, Siebert D, Schulz A et al (1999) Using virtual reality techniques in maxillofacial surgery planning. Virtual Real 4(3):213–222. https://doi.org/10.1007/BF01418157
Pereira N, Kufeke M, Parada L et al (2019) Augmented reality microsurgical planning with a smartphone (ARM-PS): a dissection route map in your pocket. J Plastic Reconstr Aesthet Surg 72(5):759–762. https://doi.org/10.1016/j.bjps.2018.12.023
Pfeiffer M, Kenngott H, Preukschas A et al (2018) IMHOTEP: virtual reality framework for surgical applications. Int J Comput Assist Radiol Surg. https://doi.org/10.1007/s11548-018-1730-x
Pulijala Y, Ma M, Pears M et al (2018) An innovative virtual reality training tool for orthognathic surgery. Int J Oral Maxillofac Surg 47(9):1199–1205. https://doi.org/10.1016/j.ijom.2018.01.005
Qiu K, Qin T, Gao W et al (2019) Tracking 3-D motion of dynamic objects using monocular visual-inertial sensing. IEEE Trans Robot 35(4):799–816. https://doi.org/10.1109/TRO.2019.2909085
Renner RS, Velichkovsky BM, Helmert JR (2013) The perception of egocentric distances in virtual environments—a review. ACM Comput Surv. https://doi.org/10.1145/2543581.2543590
Rho G, Callara AL, Condino S, et al (2020) A preliminary quantitative EEG study on Augmented Reality Guidance of Manual Tasks. In: 2020 IEEE International symposium on medical measurements and applications (MeMeA). IEEE, pp 1–5. https://doi.org/10.1109/MeMeA49120.2020.9137171
Ritchie J, Bontilao J, Kennelly S, et al (2021) COMFlex: an adaptive haptic interface with shape-changing and weight-shifting mechanism for immersive virtual reality. In: Asian CHI Symposium 2021. ACM, New York, NY, USA, pp 210–214. https://doi.org/10.1145/3429360.3468214
Sei Y, Justus K, Leduc P et al (2014) Engineering living systems on chips: from cells to human on chips. Microfluid Nanofluid 16(5):907–920. https://doi.org/10.1007/S10404-014-1341-Y/FIGURES/6
Skarbez R, Smith M, Whitton MC (2021) Revisiting Milgram and Kishino’s Reality-Virtuality Continuum. Front Virtual Real 2:27. https://doi.org/10.3389/frvir.2021.647997
Tamam C, Poehling GG (2014) Robotic-assisted unicompartmental knee arthroplasty. Sports Med Arthrosc Rev 22(4):219–222. https://doi.org/10.1097/JSA.0000000000000043
Tan S, Roosa RD, Klatzky RL et al (2021) A soft wearable tactile device using lateral skin stretch. In: 2021 IEEE world haptics conference. WHC 2021, pp 697–702. https://doi.org/10.1109/WHC49131.2021.9517185
Tejo-Otero A, Buj-Corral I, Fenollosa-Artés F (2019) 3D printing in medicine for preoperative surgical planning: a review. Ann Biomed Eng 48(2):536–555. https://doi.org/10.1007/S10439-019-02411-0
Tovares N, Boatwright P, Cagan J (2014) Experiential conjoint analysis: an experience-based method for eliciting, capturing, and modeling consumer preference. J Mech Des. https://doi.org/10.1115/1.4027985
Vedbhushan ST, Mulla MA, Haroonrasid et al (2013) Surgical incision by high frequency cautery. Indian J Surg 75(6):440. https://doi.org/10.1007/S12262-012-0520-X
Vishniakou I, Plöger PG, Seelig JD (2019) Virtual reality for animal navigation with camera-based optical flow tracking. J Neurosci Methods 327:108403. https://doi.org/10.1016/j.jneumeth.2019.108403
Wan L, Skoko J, Yu J et al (2017) Mimicking Embedded vasculature structure for 3D cancer on a chip approaches through micromilling. Sci Rep 7(1):1–8. https://doi.org/10.1038/s41598-017-16458-3
Xi N, Chen J, Gama F et al (2022) The challenges of entering the metaverse: an experiment on the effect of extended reality on workload. Inf Syst Front. https://doi.org/10.1007/s10796-022-10244-x
Yeung AWK, Tosevska A, Klager E et al (2021) Virtual and augmented reality applications in medicine: analysis of the scientific literature. J Med Internet Res 23(2):e25499. https://doi.org/10.2196/25499
Yu JZ, Korkmaz E, Berg MI et al (2017) Biomimetic scaffolds with three-dimensional undulated microtopographies. Biomaterials 128:109–120. https://doi.org/10.1016/j.biomaterials.2017.02.014
Acknowledgements
The authors would like to thank Anatomage, Inc., for providing their digital cadaver models. The authors also thank Andres Arias Rosales and Joshua Gyory for their comments on this manuscript. This work was partially supported by the Office of Naval Research (Grant N00014-17-1-2566), the National Institutes of Health (R21AR08105201A1; R01AG06100501A1), the Air Force Office of Scientific Research (FA9550-18-1-0262), and the National Science Foundation (CMMI-1946456).
Funding
Open Access funding provided by Carnegie Mellon University.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kabuye, E., LeDuc, P. & Cagan, J. A mixed reality system combining augmented reality, 3D bio-printed physical environments and inertial measurement unit sensors for task planning. Virtual Reality 27, 1845–1858 (2023). https://doi.org/10.1007/s10055-023-00777-0