Journal of Digital Imaging, Volume 31, Issue 1, pp 56–73

A Hybrid 2D/3D User Interface for Radiological Diagnosis

  • Veera Bhadra Harish Mandalika
  • Alexander I. Chernoglazov
  • Mark Billinghurst
  • Christoph Bartneck
  • Michael A. Hurrell
  • Niels de Ruiter
  • Anthony P. H. Butler
  • Philip H. Butler


This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is an important finding because it indicates that, as the techniques mature, hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.


Keywords: 3D input · Hybrid user interface · Diagnostic radiology · Medical visualization · User interface


Imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and nuclear imaging are the basis for diagnostic radiology. The quality of diagnosis depends on the radiologist’s ability to identify features or anomalies within the data, and accurately measure and report the findings. Some diagnosis tasks involve the rotation of one or more of the three anatomical planes. These tasks are often difficult using conventional radiology software as they involve using a 2D input device, such as a mouse, to manipulate a 2D plane in 3D. Mouse input is precise for 2D manipulation tasks; however, previous research shows that using the mouse for 3D manipulation can be difficult [1, 2, 3]. A study also showed that radiologists were faster with a 6 degrees of freedom (DOF) device compared to the mouse, in reformatting a slice in 3D data [4]. It is challenging to design a system that improves 3D manipulation while maintaining the benefits of mouse interaction.

In this paper, we introduce a hybrid system combining 2D and 3D interface components for diagnostic radiology. The 3D component consists of 3D stylus input and a stereoscopic 3D display with head tracking, while the 2D component consists of mouse input and a 2D display (see Fig. 1). Our system displays anatomical objects in 3D along with a 2D slicing plane, on the zSpace stereoscopic display [5]. The anatomical object, as well as the 2D plane, can be manipulated in 3D using 3D stylus input. The plane is also displayed synchronously on the 2D display. This enables experienced users to utilize familiar 2D views to see slice information while using 3D stylus input for interaction. It also allows less experienced users to utilize the 3D display, to gain an overview of the anatomy, which in turn helps them identify interesting slices and explore the information within.
Fig. 1

An image of the hybrid interface showing the 2D display at the top, followed by the zSpace display at the bottom. The image also shows mouse, keyboard, and the 3D stylus input

We also present an evaluation of our system for a scoliosis radiology diagnosis task comparing our hybrid interface to a standard 2D interface, and a 3D only interface. We chose two groups: radiology residents as our more experienced group and medical students as our less experienced group. The results demonstrate that our hybrid system improves the performance of diagnosis by providing improved 3D manipulation compared to the 2D interface.

The next section discusses related work in 3D user interfaces, virtual reality (VR), 3D manipulation, stereoscopic 3D, 3D input, and hybrid interfaces in the medical domain. This is followed by a formal description of our prototype hybrid system, how the system was evaluated, the results of the user study, a discussion of results, and a plan for future work.

Related Work

Researchers have long investigated virtual reality (VR) for viewing and interacting with medical volumetric datasets. An overview of VR in the medical domain can be found in [6, 7] and [8]. More recently, a VR user interface for viewing 3D medical images was proposed by Gallo et al. [9]. One of the key elements of using VR for medical visualization is supporting intuitive 3D interaction. This remains a challenging topic in VR, with substantial research devoted to 3D input techniques. An overview can be found in Hand’s survey of 3D interaction techniques [10].

Performing 3D manipulation using a 2D mouse is a common task in fields such as computer-aided design (CAD), 3D modeling, surgery simulations, and desktop VR. Particularly, 3D rotation is commonly used for exploring medical data. The most commonly used 3D rotation techniques for mouse input include Shoemake’s ARCBALL [11] and a virtual trackball surrounding the object [12]. A study comparing these techniques by Bade et al. [13] concluded that design principles were crucial for 3D rotation. The study also highlighted the need for intuitive interaction techniques.

As an alternative to using the mouse for 3D manipulation, 3D input devices were developed to offer more DOF for 3D manipulation. Some early 3D input devices, developed particularly for exploring volumetric medical data, include Hinckley’s prop system [14] and the Cubic mouse [15]. More recently, the Wiimote [16] and the Leap Motion controller [17] have been used to capture natural gesture input. Furthermore, researchers focused on using contact free gesture input, for example, a controller-free tool for exploring medical images [18] and a touchless interface for image visualization in urological surgery [19].

Previous research suggests that 3D input devices outperform the mouse for 3D manipulation tasks. A 1997 study by Hinckley et al. compared the usability of 3D rotation techniques and found that 3D orientation input devices were faster than 2D input without sacrificing accuracy [2]. Balakrishnan et al. modified a traditional mouse by adding two additional DOF, forming the Rockin’Mouse, which achieved 30% better performance for 3D interaction than mouse input [20]. A study by Dang et al. compared input devices for 3D exploration and found that a wand interface provided the best performance [21]. A study by Zudilova et al. evaluating glove input in 2D and 3D for medical image analysis showed that using more DOF improved performance and accuracy [22].

Contrary to the above studies, a study by Bérard et al. [23] found that a 2D mouse (with 2 DOF) used with three orthographic views outperformed 3D input devices (with 3 DOF) used with a stereoscopic perspective view. A later modification to the task, which added pop-up depth cues within the stereoscopic perspective view, showed that the 3D input device outperformed the 2D mouse [24]. These studies led us to expect that the 6 DOF 3D stylus input from the zSpace should provide similar improvements in 3D manipulation over the 2D mouse.

Most systems still strongly rely on some level of 2D user interaction. Hence, researchers started finding ways to combine 2D and 3D interaction to form hybrid user interfaces. Since VR hardware had limited resolution, early hybrid interfaces focused on providing means for completing high-resolution tasks in virtual environments [25]. The most common approach has been integrating handheld and touchscreen devices into virtual environments for 2D interaction [26, 27, 28, 29, 30]. Others have focused on providing hybrid input, such as the Virtual Tricoder [31], Pinch Glove-based 2D/3D cursor approach [32], Pick and Drop approach [33], and other tangible user interfaces [34].

A summary of VR in health care can be found in [35], where they highlighted the need for more research focused on applications of VR in health care. A summary of VR in medicine can be found in [36], where they discuss the VR research in the medical domain, outlining major breakthroughs, and its impact in providing healthcare services. A more recent meta-analysis of VR research in medicine can be found in [37].

Some developments involved multi-display setups, for example, the hybrid interface for visual data exploration combining a 3D display (with glove input) and a tablet PC proposed by Baumgärtner et al. [38]. Similarly, in the medical domain, a hybrid interface for manipulating volumetric medical data for liver segmentation refinement was developed by Bornik et al. [39]. This interface combined a tablet PC and an immersive VR environment to improve 3D manipulation. We extend their approach: our aim is to combine desktop VR with stylus input and traditional 2D displays to improve 3D manipulation for diagnostic radiology.

A 3D volume navigation tool for diagnostic radiology that used a 3D mouse to provide both 2D and 3D input for exploring volumetric data was developed by Teistler et al. [40]. A tool for post-mortem CT visualization using a 3D gaming controller for interaction, along with an active stereo screen, was also proposed by Teistler et al. [41]. Another approach, from Graves et al. [4], used a fixed 6 DOF controller to reformat a slice in 3D data and found that radiologists were faster with their approach compared to mouse input on a standard radiology workstation.

The literature shows that a number of researchers explored the benefits of hybrid interfaces for 3D object manipulation. However, as far as we are aware, there are very few medical interfaces that use hybrid input, and none that combine 2D and 3D screen viewing with stylus interaction in diagnostic radiology. Hence, we present a hybrid user interface for diagnostic radiology and an evaluation against the existing 2D interface, comparing novice and experienced users.

Hybrid User Interface

This section describes the prototype hybrid interface we have designed. It starts by reviewing the hardware and software components, followed by an overview of interface design and walkthrough. Finally, we discuss various interaction tools.

Hardware Setup

The hardware setup consists of two parts: the zSpace desktop VR system and a traditional 2D system. The zSpace consists of a 24-inch (1920 × 1080 pixels) passive stereo display with a refresh rate of 120 Hz. The zSpace screen also has embedded cameras for tracking. A pair of tracked circularly polarized glasses is used for viewing the stereo display. The zSpace also uses a 6 DOF 3D stylus with a polling rate of 1000 Hz. The user’s head and 3D stylus are tracked in 6 DOF by the cameras embedded in the display. The display is angled at 45° from the desk for convenient viewing (see Fig. 1).

The 2D component of our system consists of a 2D monitor, keyboard, and mouse input. The 2D monitor is placed perpendicular to the desk, above the zSpace display. The distance from the 2D screen to the user is approximately 30 cm. The keyboard is placed at the center below the zSpace display, with the mouse and 3D stylus placed to the side. A single workstation PC (3.6 GHz Intel Core i5-3470, NVidia Quadro K5200) is used to drive both systems.

Software Setup

The software system consists of a single application that drives the desktop VR system as well as the 2D system. The application is built on top of the MARS Vision framework [42, 43, 44]. MARS Vision contains DICOM dataset handling code based on DCMTK [45] for loading volumetric datasets, as well as the 2D desktop user interface, which is built using the Qt toolkit [46]. The desktop VR user interface uses a custom rendering engine built with the OpenGL [47] and CUDA [48] APIs. The software was programmed in C++ with the Visual Studio IDE for the Windows 7 operating system.

Once an anatomical dataset is loaded, it is visualized on both systems simultaneously. The three anatomical planes showing slice information are displayed on the 2D system along with other application controls, while an extracted 3D model and a 3D user interface are displayed on the VR system (see Fig. 1). Interaction with the data is possible with either system; however, the dataset loading and application controls are limited to the 2D desktop system. User interaction with the 2D system is performed using the 2D mouse while the VR system uses a 3D stylus. A keyboard is used for text input in both systems.

Interface Design

2D Interface

The 2D user interface was designed to closely resemble the standard radiology diagnosis tool, Inteleviewer PACS Viewer (Intelerad Medical Systems) [49]. This was done to minimize the differences for the users already familiar with traditional radiology software. The functionality, keyboard, and mouse controls were also mapped in a similar fashion to that of Inteleviewer.

The 2D interface contains a fixed application controls tab on the left, followed by the three anatomical planes and an arbitrary plane in a two by two grid layout as shown in Fig. 2. The grid layout can be re-arranged to the desired format. Each slice view also displays a scale that is positioned along the right and bottom edges of the slice view. The user can interact with the 2D interface using mouse input alone, or in combination with a keyboard.
Fig. 2

2D interface showing the three anatomical planes, and the arbitrary plane (top right) in a 2 × 2 grid. The application controls tab lies to the left of the screen

The mouse can be used to select and interact with the 2D slice views using the left mouse button. The user can hold the left mouse button and move the mouse to pan the slice, and use the scroll wheel to scroll through slices. Scrolling can also be done by dragging the scroll bar at the left of each slice view with the left mouse button, or by using the spinner at the top right corner of the slice view, see Fig. 3. The scroll wheel can be used while holding the “Shift” key on the keyboard to change the scale of the slice content.
Fig. 3

2D interface showing the slice view. The scroll bar is to the left; the slice control buttons, including the PRS (preset window/level) button, the ruler button for switching interaction modes between annotation and measurement, and the slice spinner, are at the top right; the scale to the right and bottom shows length

The window and level can be changed by holding the middle or scroll mouse button and moving the mouse horizontally or vertically. Preset window and level settings can be directly selected using the “PRS” button at the top of the slice view. The right mouse button is used to place annotations and measurements on the slice based on the interaction mode. Interaction modes can be changed using the application controls tab to the left (see Fig. 2), or the “ruler” icon at the top of the slice view (see Fig. 3).
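The window/level operation maps raw CT densities to display gray values. The sketch below is an illustrative linear mapping, not the paper's implementation; the function name and the 8-bit output range are assumptions. Densities below the window are clamped to black and densities above it to white.

```cpp
#include <algorithm>

// Map a CT density v to an 8-bit gray value given a window width W and a
// window level L (the window's center). Illustrative sketch only.
int windowLevel(double v, double W, double L) {
    double lo = L - W / 2.0;            // lower edge of the window
    double t = (v - lo) / W;            // 0..1 inside the window
    t = std::clamp(t, 0.0, 1.0);        // clamp values outside the window
    return static_cast<int>(t * 255.0 + 0.5);
}
```

With this mapping, narrowing W increases contrast within the window, while moving L shifts which densities are visible, matching the horizontal/vertical mouse gestures described above.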

We also allow functionality to be remapped to any mouse button or the scroll wheel. This was done so that experienced users, who are accustomed to using the mouse in a particular way, can remap the functionality to match their preference. This eliminates the need for experienced users to unlearn their preferred mouse interaction methods in favor of a fixed mapping in our 2D interface.

3D Interface

The 3D interface contains a 3D scene that shows a model, representing an iso-surface extracted from the volumetric CT data, with a strip of user interface buttons at the bottom as shown in Fig. 4. The scene is rendered in stereoscopic 3D relative to the user’s tracked head position. The buttons are distinct 3D models created to represent various functions such as enabling the 2D plane, and changing between various interaction modes such as the 3D annotation mode and the 3D measurement mode (see Table 1). Buttons can be selected by touching them with the stylus. When selected, each button enlarges, its function is described underneath, and a translucent billboard with stylus controls is displayed for reference. While selected, a button can be triggered by pressing the front stylus button, see Fig. 5. The 3D stylus is the primary input for the 3D interface, while the keyboard is still used for text input.
Fig. 4

3D interface showing the zSpace display with stylus interaction. A model extracted from the CT dataset is shown along with the anatomical plane, and the 3D buttons at the bottom of the screen

Table 1

Functionality of stylus “left” and “right” buttons for various 3D interaction modes

Interaction mode   | Left button                                            | Right button
Model manipulation | Translate without rotating                             | Increase or decrease the model scale
Point annotation   | Delete the selected annotation                         | Insert annotation at the end of the stylus
Line measurement   | Delete the entire line connected by the selected point | Insert an endpoint of the line to measure
Angle measurement  | Clear selected lines                                   | Select lines to measure the angle between

The stylus contains three buttons: front, left, and right buttons, see Fig. 5. The front button is always used for picking and manipulating the model or the anatomical plane, as well as triggering all the 3D buttons. The functionality of the left and right buttons depends on the selected mode (see Table 1).

Fig. 5

zSpace 3D stylus showing its front, left, and right buttons

Interface Walkthrough

When a CT dataset is loaded, the 2D interface displays the three anatomical planes. The arbitrary plane is initially set to replicate the coronal plane.

A 3D mesh is extracted from the CT dataset using the Marching Cubes algorithm [50], with the extraction performed on the GPU using CUDA. The mesh is re-extracted only when a change requires it, rather than every frame; nevertheless, the extraction time (about 15 milliseconds) is fast enough to be considered real-time. This mesh model is displayed on the 3D interface. Once the model is extracted, the rendering cost for one frame (for both eyes) is about 3 milliseconds, which results in an average frame rate of about 330 frames per second (FPS) per eye.
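The change-driven extraction described above follows a common dirty-flag pattern. The sketch below uses hypothetical names and a CPU-side placeholder for the CUDA Marching Cubes pass; it counts extractions to show that the expensive rebuild runs only when the input changes, even though `get()` is called every frame.

```cpp
#include <vector>

struct Mesh { std::vector<float> vertices; };

// Hypothetical cache around an expensive iso-surface extraction.
struct IsoSurfaceCache {
    float iso = 0.0f;       // current iso-value
    bool dirty = true;      // set whenever a rebuild is required
    int extractions = 0;    // counts how often the extraction actually runs
    Mesh mesh;

    void setIso(float v) {
        if (v != iso) { iso = v; dirty = true; }
    }

    const Mesh& get() {     // called once per rendered frame
        if (dirty) {
            mesh = Mesh{};  // placeholder for the real GPU extraction
            ++extractions;
            dirty = false;
        }
        return mesh;
    }
};
```

Repeated calls to `get()` within a frame loop reuse the cached mesh; only `setIso` (or any other dataset change) marks it dirty and triggers a rebuild.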

The user can interact with the 2D interface by selecting and scrolling through any anatomical plane using the mouse. If the 3D plane is enabled in the 3D interface, its position and orientation on the object are synchronized to match the selected 2D plane. This concept of a synchronized slice being displayed in both 2D and 3D views is similar to the approach used by Teistler et al. [40].

The part of the model in front of the plane is rendered translucent, and the level of transparency can be adjusted by the user. The model’s front portion can also be made completely transparent, which is similar to the plane clipping technique used by Dai et al. [51]. The plane displays the CT densities from the CT dataset. This provides a rapid assessment of the arbitrary plane’s position in the model. The user can synchronize the arbitrary plane’s orientation and position to any of the three orthogonal planes by left clicking on the desired orthogonal plane while holding down the [control] key.

The user can rotate the arbitrary plane to a desired orientation using the mouse in the 2D interface. This can be done by holding down the [shift] key on the keyboard along with the left mouse button and moving the mouse. Horizontal movement rotates the plane around the y-axis while vertical movement rotates the plane around the x-axis. The slice is always rotated about its own center point. This is similar to the interaction method used in Inteleviewer.
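The mapping from mouse motion to plane rotation can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the sensitivity constant and function names are assumptions. Horizontal deltas rotate the plane's normal about the y-axis, vertical deltas about the x-axis; because only the orientation changes, the rotation is effectively about the slice's own center point.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Rotate v about the y-axis (right-handed) by angle a in radians.
Vec3 rotateY(const Vec3& v, double a) {
    return { v[0] * std::cos(a) + v[2] * std::sin(a),
             v[1],
            -v[0] * std::sin(a) + v[2] * std::cos(a) };
}

// Rotate v about the x-axis by angle a in radians.
Vec3 rotateX(const Vec3& v, double a) {
    return { v[0],
             v[1] * std::cos(a) - v[2] * std::sin(a),
             v[1] * std::sin(a) + v[2] * std::cos(a) };
}

// Update the plane normal from mouse deltas (in pixels).
Vec3 rotateNormal(Vec3 n, int dxPixels, int dyPixels) {
    const double k = 0.005;           // radians per pixel (hypothetical)
    n = rotateY(n, dxPixels * k);     // horizontal motion -> y-axis rotation
    n = rotateX(n, dyPixels * k);     // vertical motion   -> x-axis rotation
    return n;
}
```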

When the arbitrary slice is selected, its guidelines are displayed on the other orthogonal views. These guidelines can also be used to manipulate the position and orientation of the arbitrary slice. These actions are synchronized with the plane in the 3D interface.

Annotations are placed by right clicking on the slice, while the annotation mode is selected. Lines can be drawn by right clicking on the slice to start the line and dragging to the desired endpoint, while the line measurement mode is selected. The line can then be moved, or modified by moving its endpoints.

Angle measurement mode allows the selection of two lines to measure the angle between them. If only two lines are drawn, they are automatically selected and the angle between them is displayed. Annotations and measurements on the slice view are also displayed in the 3D interface. Displaying 2D measurements in 3D is similar to the approach used by Preim et al. [52].

In the 3D interface, the 3D model resides mostly in the negative parallax region. The stereoscopic screen being angled from the desk, combined with the head tracking, makes the model appear to float over the screen.

The 3D model can be manipulated by simply touching it with the stylus and pressing the front stylus button to pick it up. Once picked up, the model is attached to the end of the stylus, and it follows the hand motion and orientation in a one-to-one fashion. The user can release the front button once the desired position and orientation is reached, see Fig. 6.
Fig. 6

An image of the 3D interface showing model and slice plane rotation using the 3D stylus input

Similarly, the user is able to simply pick and move or rotate the arbitrary plane as desired, see Fig. 6. The plane maintains its relative position to the 3D model when the model is manipulated. Changes made to the plane orientation are always updated in the 2D interface. This can be seen in Fig. 1, where the 3D plane and the arbitrary plane (top right) are showing the same information.

The user can also perform other functions, such as annotations or measurements, in the 3D interface, as shown in Fig. 7. Annotations and measurements can be placed using the stylus, by first selecting the appropriate mode using the 3D mode buttons at the bottom of the 3D interface, see Fig. 7.
Fig. 7

An image of the 3D interface showing annotations and measurements tools used for a scoliosis angle measurement task

Annotations can be placed by pressing the right stylus button. An annotation is placed at the end of the virtual stylus, represented by a sphere. Once placed, the keyboard is used to enter a name for it. The left stylus button can be used to delete an annotation.

Line measurements can be placed using the right stylus button to place the start and end points of the desired line, sequentially. The points are inserted at the end of the stylus each time the right stylus button is pressed. The endpoints are represented using spheres, with a 3D line between them. The line can be deleted, by touching one of its endpoints and pressing the left stylus button.

Angles can be measured using the right stylus button to select two lines for measuring the angle between them. The selected lines are displayed as white lines. Once two lines have been selected, the angle is displayed as white 3D text between them. The line selection can be cleared using the left stylus button.
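The angle between two measured lines reduces to the angle between their direction vectors. A minimal sketch follows; treating the lines as undirected (via the absolute value of the dot product) keeps the result in [0, 90] degrees. The function name is an assumption, not the paper's API.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
}

// Angle in degrees between line segments (p0,p1) and (q0,q1), treated as
// undirected lines, so the result lies in [0, 90].
double angleBetweenLines(Vec3 p0, Vec3 p1, Vec3 q0, Vec3 q1) {
    Vec3 u = sub(p1, p0), v = sub(q1, q0);
    double c = std::fabs(dot(u, v)) /
               (std::sqrt(dot(u, u)) * std::sqrt(dot(v, v)));
    c = std::fmin(1.0, c);                  // guard against rounding > 1
    return std::acos(c) * 180.0 / std::acos(-1.0);
}
```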

At any stage, the annotations or measurements can be picked and moved using the front stylus button. They maintain their relative position to the 3D model when the model is manipulated. They also scale with the model. All the stylus interaction modes and button mappings are shown in Table 1.

If the arbitrary plane is enabled while placing annotations or measurements, they will automatically snap to the slice plane if they are within 1 cm of it. This makes it easy for the user to place measurements on a slice plane using the 3D stylus input.
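The snapping behavior amounts to a point-to-plane distance test followed by a projection. The sketch below is illustrative (names and the world-unit threshold are assumptions); the plane is given by a point on it and a unit normal.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Signed distance from point p to the plane through p0 with unit normal n.
double planeDistance(const Vec3& p, const Vec3& p0, const Vec3& n) {
    return (p[0]-p0[0])*n[0] + (p[1]-p0[1])*n[1] + (p[2]-p0[2])*n[2];
}

// Snap p onto the plane if it lies within `threshold` (e.g. 1 cm in world
// units); otherwise leave it where it is.
Vec3 snapToPlane(Vec3 p, const Vec3& p0, const Vec3& n, double threshold) {
    double d = planeDistance(p, p0, n);
    if (std::fabs(d) <= threshold)
        for (int i = 0; i < 3; ++i) p[i] -= d * n[i];  // project onto plane
    return p;
}
```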

Annotations such as 3D points are shown in the 2D interface on the corresponding slice; however, 3D measurements such as 3D lines are not shown in the 2D interface. The user can choose to interact with the 2D or 3D interface, or a combination of both, in order to perform a diagnosis task as seen in Fig. 1.

Interaction Tools

Clinical information, such as the present symptoms and prior medical conditions, is used by the radiologist to rank the likelihood of various possible diagnoses when interpreting the scans.

Such information may also indicate the need to obtain diagnostic measurements, in which case the measurement task would typically comprise three steps: inspection, annotation (optional), and obtaining the measurement itself.

For our hybrid interface, the DICOM data loading and the 2D user interface, including the orthogonal views were part of the MARS Vision framework. We developed the arbitrary view, and the annotation and measurement tools, specifically for the purpose of testing our hybrid interface.


The first step involves the visual inspection of slices from the CT dataset to find the region of interest (ROI) within a slice image. The user can use a set of tools that we refer to as the “2D slice manipulation tools” to perform the inspection. These tools include controls that not only allow the user to pan, scale or scroll through the slices, but also adjust the gray-scale window width and level to optimally display the slice. These tools also provide controls for rotating the slice planes. The inspection using our hybrid system additionally allows the user to use the 3D stylus to directly manipulate and re-orient a slice, to perform the inspection.


Annotation tools are used to either annotate an ROI or an entire slice. Annotations contain text that provides additional information regarding the ROI or slice. Annotations can be used by radiologists to share findings. Annotations can also be used as a prerequisite for measurements. Typical annotation tools include point and line annotations as well as controls for annotating various ROIs within a slice or the entire slice. We only support point annotations in our system (see Fig. 8).
Fig. 8

2D interface demonstrating the annotation and measurement tools used for a scoliosis angle measurement task on the arbitrary slice


Measurement tools are used to quantify various aspects of CT datasets. These measurements often include selecting a finite ROI on a slice using shapes such as rectangles, circles, or ellipses. The tool then provides min, max, and average values. The tool can also be used to measure the length, or area of certain parts of the slice. Measurements often form the basis for diagnosis reports. We have implemented most of the measurement tools for our 2D interface (see Fig. 8), along with line and angle measurement tools for our 3D interface (see Fig. 4).

Hybrid Interaction

Hybrid interaction can be performed in two ways, serial and parallel. In the serial approach, the tasks are performed using the 2D and 3D components independently in a sequential manner. The diagnosis task can be divided into multiple steps, each of which can be performed using 2D or 3D components alone. Since both the components are synchronized with each other, the user can proceed with either interface for the next step.

In the parallel approach, both the 2D and 3D components can be used together at the same time to perform a single step in the diagnosis task. For example, a user can use the stylus to manipulate the arbitrary plane, while looking at the slice content on the 2D interface. The user can also choose to scroll through a slice using the mouse while looking at the 3D model to understand its relative position. Ambidextrous users can use the mouse and stylus together to manipulate the slice and the 3D model simultaneously.

Both approaches are supported in our system; however, we only test the serial approach in our evaluation, as it requires considerably less training for users who have prior experience with the 2D component. Since the serial approach uses only mouse or stylus input at any given time, it can be easier and faster to learn the system.

We believe that the hybrid interaction method can have considerable advantages. The 2D component can be used for high precision tasks such as measuring line lengths, angles between lines, or marking 2D ROIs. Data inspection can also be performed at higher resolution, since it is possible to view the slice information with a one-to-one mapping between the slice data and screen pixels. This is not ideal using the 3D component, since slice information is textured onto a 3D plane, which can appear distorted due to a non-optimal viewpoint in the 3D view. Additionally, the circular polarization of the stereoscopic screen reduces image brightness, making it harder to view the slice information in the 3D view.

The 3D component can be used to gain a quick overview of the anatomy by manipulating the displayed anatomical object. It can also be used for tasks such as slice plane orientation, and 3D line and 3D ROI measurements. The speed/accuracy trade-off will depend heavily on the type of diagnosis task. The key factors would be the flow of actions in performing a particular diagnosis task, the level of synchronization and the user’s ability to shift focus between the 2D and 3D components.


The goal of this evaluation was to determine the effectiveness of our hybrid interface for a specialized radiological diagnosis task, compared to a more traditional 2D interface. We wished to test the effect on task performance and accuracy of providing an easier means of rotating the arbitrary plane, compared to using the 2D mouse. We achieved this by comparing the traditional 2D slice manipulation tools with our hybrid interaction tools. We also chose a specialized task that required arbitrary slice rotation.

Since our hybrid interface is a combination of 2D and 3D components, we also included a 3D only interface to eliminate the possibility that any observed difference is due to the 3D component alone. We also implemented 3D measurement tools specifically for the 3D only interface. We therefore evaluated three interface conditions: 2D, 3D, and hybrid.


The key research question was to determine if our hybrid interface provides any improvements over the existing 2D interface in terms of task performance and accuracy. Furthermore, we wanted to study how the user’s prior experience with the 2D interface influenced their task performance with our hybrid interface. Hence, the experiment was set up as a mixed within/between study in which the interface (2D, 3D, hybrid) was the within factor, and experience (student, resident) was the between factor. We obtained human ethics approval for our user study from University of Canterbury Human Ethics Committee (Ref# HEC 2016/35/LR-PS) [53].


For the evaluation, we chose two groups of participants: fourth-year medical students, and radiology residents (physicians training to become radiologists). The fourth-year medical students have the necessary medical background to perform the diagnosis tasks but have little to no experience using any diagnosis tools. Residents have at least one to two years’ experience using a 2D only interface, using it daily for various diagnosis tasks.

Since our hybrid interface uses a stereoscopic 3D display with 3D stylus interaction, we only chose participants with binocular or stereo vision, who had no hand related disabilities that would prevent them from using the stylus input. We had 31 participants for the evaluation. The student group consisted of 21 participants (11 male and 10 female) with an average age of 22, ranging from 21 to 27. The resident group consisted of 10 participants (6 male and 4 female) with an average age of 32, ranging from 28 to 38.


Most diagnosis tasks are performed by exploring one or more of the three anatomical planes, namely sagittal, coronal, and transverse planes. The procedure for diagnostic radiology widely varies for each diagnosis task. The conventional radiological workstation software allows exploration of 2D slices for diagnosis. Radiologists receive intense training on how to mentally visualize three-dimensional anatomical objects from these slices, and they require a lot of experience to do this task well [54].

Creating a 3D mental model helps them associate the location of each slice with its position inside the anatomical object. They find and examine the slices of interest, identify anomalies or features within them, and report their findings. Although it is possible to render 3D anatomical objects from 2D slice data (for example, with volume rendering [55]), radiologists seldom rely on this; they find it easier and faster to diagnose using only the 2D slices.

We had two major criteria for selecting an evaluation task. Firstly, we wanted the task to have an element of 3D rotation for testing the effectiveness of our hybrid interface. Secondly, we wanted the task to be easy enough for medical students to perform, without much training.

After consultation with experienced radiologists, we chose a real diagnosis task: measuring the scoliosis angle from abdominal CT scans. The task consisted of three simple steps: correcting the coronal slice orientation, annotating the spinal column, and measuring the scoliosis angle. Hence, it could be performed by anyone with a basic knowledge of human anatomy.
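The final step reduces to computing the angle between two user-drawn lines. For illustration, this can be sketched in Python (this is not the code used in our system; the acute-angle convention and 2D point representation are illustrative assumptions):

```python
import math

def line_angle_deg(p1, q1, p2, q2):
    """Angle in degrees between the (undirected) lines p1-q1 and p2-q2.

    Points are (x, y) tuples in slice coordinates.
    """
    v1 = (q1[0] - p1[0], q1[1] - p1[1])
    v2 = (q2[0] - p2[0], q2[1] - p2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return min(ang, 180.0 - ang)  # lines are undirected: report the acute angle
```

For example, a horizontal line against a 45° diagonal yields 45.0 degrees.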

We only chose abdominal CT scans in which the patient’s spinal column was not parallel to the coronal plane, as this is a very common scenario for such scans. Hence, the diagnosis task involved 3D manipulation of the coronal plane to correct its orientation. Three similar, but distinct, datasets were used for the experiment.


Each participant first received a 5-min interface demonstration from an instructor, followed by a 5-min practice period. During the practice period, the participant was allowed to perform the diagnosis task with a practice dataset and was free to ask questions regarding the task or interface. Afterwards, the participant performed the actual diagnosis task on a different dataset.

The scoliosis task consisted of three steps. The first step was to adjust the orientation of the arbitrary 2D (coronal) slice to optimally display the spine. The second step was to examine the coronal slice and annotate the vertebrae. Finally, the user drew two lines between two sets of vertebral discs and measured the scoliosis angle between them. This procedure was repeated for each interface condition. The order of the conditions and the assignment of datasets were both counterbalanced using a balanced Latin square method [56].
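Counterbalancing with a balanced Latin square can be sketched as follows. This is the standard construction (not the authors' code); for an odd number of conditions, such as our three, the mirrored rows are appended so that first-order carry-over effects remain balanced:

```python
def balanced_latin_square(n):
    """Return a list of condition orders; row i is assigned to participant i (mod len)."""
    rows = [[((j // 2 + 1 if j % 2 else n - j // 2) + i) % n for j in range(n)]
            for i in range(n)]
    if n % 2:  # odd n: append reversed rows to balance condition-pair orderings
        rows += [list(reversed(r)) for r in rows]
    return rows

# Map condition indices to the three interface conditions of the study
conditions = ["2D", "3D", "hybrid"]
orders = [[conditions[c] for c in row] for row in balanced_latin_square(3)]
```

With three conditions this yields six orders; each condition appears in each position twice, and each condition follows every other condition equally often.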

The keyboard was used to label the annotations in all conditions. The 2D condition involved a 2D mouse, while the 3D condition used the 3D stylus. The hybrid condition was a mixture of both, utilizing the 3D stylus input for the first step, and mouse input for the rest of the task.

After each diagnosis task, participants were required to complete several questionnaires (see the “Measures” section). At the end of the experiment, participants were asked to comment on their experience in a semi-structured interview.

To better control the task environment, the window/level settings were pre-set so that the bones were clearly visible within the slice. This guaranteed the same initial conditions for the second step. In addition, we chose task datasets that required the same set of vertebral discs to be measured for the scoliosis angle. This was communicated to the participants prior to starting each diagnosis task. Participants were also given reference cards showing a labeled diagram of a healthy spinal column and illustrating the interface controls for all conditions.


Measures

Prior to the experiment, the scoliosis angle was measured by an expert radiologist using each of the three interface conditions. This was repeated three times with every interface, for each dataset. We observed only a very minor difference of about 0.5° across the expert’s repeated angle measurements. The average of the measured angles was then recorded as the best solution for each dataset, to establish a baseline for comparison.

The effectiveness of our hybrid system can be quantified with task completion time and accuracy. For each task, a snapshot of the labeled vertebrae, the completion time, and the resulting scoliosis angle were recorded. The snapshot was used to verify the correct annotation of the spinal column.

The measured angle was later compared with the baseline measurement provided by the expert to establish an accuracy measure for each diagnosis task. The absolute difference between the angles was taken as the absolute error for comparison.

Additionally, since the 3D condition involved multiple mode changes for performing the diagnosis task, we also recorded the number of mode changes for each participant.

Another key measure is the usability of the system, which was assessed with the System Usability Scale (SUS) questionnaire [57]. We also measured task load using the NASA TLX questionnaire [58], from which we used only the physical and mental task load scales.
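SUS scoring follows the standard formula: odd-numbered (positively worded) items contribute (rating − 1), even-numbered items contribute (5 − rating), and the raw sum is scaled by 2.5 to a 0–100 range. A sketch (not the authors' scoring code):

```python
def sus_score(responses):
    """Score a SUS questionnaire.

    responses: ratings for SUS items 1-10, each on a 1-5 Likert scale.
    Returns a usability score between 0 and 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5  # scale the 0-40 raw sum to 0-100
```

A uniformly neutral response (all 3s) scores 50.0.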


The key question we wished to answer was whether our 2D + 3D hybrid interface was more effective than the existing 2D interface, or the 3D interface, in a radiological diagnosis task. To answer this question, we conducted a two-way mixed ANOVA with two factors, interface condition (2D, 3D, hybrid) and experience (student, resident), on the experimental data, with completion time, absolute error, SUS score, physical task load, and mental task load as the dependent variables. We refer to the interface condition simply as the interface for the remainder of this section.

There were two significant outliers in the data, as assessed by inspection of a boxplot for values greater than 3 box-lengths from the edge of the box. The cause of these outliers was a hardware malfunction during the experiment; therefore, both were removed from the analysis.

The data was normally distributed, as assessed by Shapiro-Wilk’s test of normality (p > 0.05). There was homogeneity of variance (p > 0.05) and covariances (p > 0.05), as assessed by Levene’s test of homogeneity of variances and Box’s M test, respectively.

Mauchly’s test of sphericity indicated that the assumption of sphericity was met for the two-way interaction for completion time (χ²(2) = 1.238, p = 0.538), absolute error (χ²(2) = 1.324, p = 0.516), physical task load (χ²(2) = 0.262, p = 0.877), and mental task load (χ²(2) = 3.394, p = 0.183); however, it was violated for SUS score (χ²(2) = 20.934, p < 0.0005). The Greenhouse-Geisser correction was used for SUS score (ε < 0.75) [59].

There was a statistically significant interaction between the interface and experience on completion time (F(2,54) = 37.835, p < 0.0005, partial η² = 0.584), absolute error (F(2,54) = 4.416, p = 0.017, partial η² = 0.141), SUS score (F(1.288,34.772) = 13.604, p < 0.0005, partial η² = 0.335, ε = 0.644), and physical task load (F(2,54) = 5.564, p = 0.006, partial η² = 0.171), but the interaction was not statistically significant for mental task load (F(2,54) = 0.053, p = 0.948, partial η² = 0.002).
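Partial eta squared can be recovered directly from an F statistic and its degrees of freedom, since η²p = SS_effect/(SS_effect + SS_error) = (F·df1)/(F·df1 + df2). A one-line sketch, checked against the reported completion-time interaction:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Effect size from an F statistic: SS_effect / (SS_effect + SS_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# e.g. the interface-by-experience interaction on completion time:
# partial_eta_squared(37.835, 2, 54) ≈ 0.584
```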

For the mental task load, since the interaction was not statistically significant, we investigated main effects, but there was no statistically significant difference in mental task load between interfaces (p = 0.089) or groups (p = 0.161). For other variables, since statistically significant interaction implies that the impact of the interface is dependent on the group, we have not investigated main effects directly. Instead, we investigated the simple main effects of interface for each group individually and vice-versa, which are reported in two parts.

The first part consists of the effects of the interface within each experience group for the dependent variables. This is done by first splitting the data into the two experience groups, students and residents, and then running a one-way repeated measures ANOVA on each group with the interface as the repeated factor. The mean (M), standard error (SE), and p value are reported for each combination of interfaces.

The second part consists of the effects of each experience group on the interfaces for the dependent variables. This is done by running an independent samples t test on the dependent variables for all interfaces, with experience as the grouping variable (students vs. residents). The results are reported as mean ± standard error (M ± SE), followed by the t value and p value for the mean difference. We also compute and report the effect size (d) for the t test using Cohen’s d [60].
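The pooled-standard-deviation form of Cohen's d for two independent samples can be sketched in Python (a generic implementation, not the analysis script used for the study):

```python
import math
import statistics as st

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled_var = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / math.sqrt(pooled_var)
```

By the usual rule of thumb, |d| ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts most of the significant differences reported below in the large range.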

Effects of Interface and Group on Completion Time

For the student group, the task completion time was statistically significantly higher using 2D compared to 3D (M = 24.21, SE = 7.91 seconds, p = 0.020), 2D compared to hybrid (M = 51.26, SE = 8.77 seconds, p < 0.0005), and 3D compared to hybrid (M = 27.05, SE = 9.92 seconds, p = 0.042). For the residents, the task completion time was statistically significantly lower using 2D compared to 3D (M = 113.3, SE = 16.27 seconds, p < 0.0005) and 2D compared to hybrid (M = 44.7, SE = 11.67 seconds, p = 0.012). However, task completion time was statistically significantly higher in 3D compared to hybrid (M = 68.6, SE = 15.68 seconds, p = 0.005) (Fig. 9).
Fig. 9

Mean completion time

The completion time of students, when compared to residents, was statistically significantly higher using the 2D interface (26 ± 10.15 seconds, t(27) = 2.563, p = 0.016, d = 1.0), but was statistically significantly lower using the 3D interface (111.51 ± 17.38 seconds, t(27) = 6.415, p < 0.0005, d = 2.51) and the hybrid interface (69.96 ± 11.19 seconds, t(27) = 6.255, p < 0.0005, d = 2.44) (Fig. 9).

Effects of Interface and Group on Absolute Error

For the student group, the absolute error was statistically significantly higher using 2D compared to 3D (M = 4.29, SE = 0.91, p = 0.001) and 2D compared to hybrid (M = 4.789, SE = 0.86, p < 0.0005), but the absolute error was not statistically significantly different between the 3D and hybrid interfaces (M = 0.502, SE = 0.743, p = 1.0). For the resident group, the absolute error was statistically significantly higher using 2D compared to hybrid (M = 1.97, SE = 0.50, p = 0.01), but was not statistically significantly different between 2D and 3D (M = 0.91, SE = 0.47, p = 0.257) or between 3D and hybrid (M = 1.06, SE = 0.44, p = 0.117) (Fig. 10).
Fig. 10

Mean absolute error

The absolute error of students, when compared to residents, was statistically significantly higher using the 2D interface (3.15 ± 0.70, t(25.7) = 4.494, p = 0.001, d = 1.38). There was no statistically significant difference in absolute error between students and residents using the 3D interface (− 0.22 ± 0.7, t(27) = −0.320, p = 0.751, d = 0.12), and the hybrid interface (0.34 ± 0.51, t(23.901) = 0.657, p = 0.517, d = 0.2) (Fig. 10).

Effects of Interface and Group on SUS Score

For the student group, the SUS score was statistically significantly lower for 2D compared to 3D (M = 14.08, SE = 3.87, p = 0.006) and for 2D compared to hybrid (M = 13.29, SE = 4.45, p = 0.024), but the SUS score was not statistically significantly different between the 3D and hybrid interfaces (M = 0.79, SE = 1.47, p = 1.0). For the resident group, the SUS score was statistically significantly higher for 2D compared to 3D (M = 11.8, SE = 3.25, p = 0.016) and for 2D compared to hybrid (M = 6.8, SE = 1.79, p = 0.013), but was not statistically significantly different between the 3D and hybrid interfaces (M = 5.0, SE = 2.44, p = 0.213) (Fig. 11).
Fig. 11

Mean SUS score: higher score represents better usability

The SUS score of students, when compared to residents, was statistically significantly lower for the 2D interface (8.83 ± 4.23, t(27) = 2.085, p = 0.047, d = 0.82), but was statistically significantly higher for the 3D interface (17.05 ± 3.2, t(27) = 5.336, p < 0.0005, d = 2.08), and hybrid interface (11.26 ± 3.56, t(27) = 3.166, p = 0.004, d = 1.24) (Fig. 11).

Effects of Interface and Group on Physical Task Load

For the student group, the physical task load was statistically significantly lower for the 2D interface compared to 3D (M = 10.0, SE = 1.83, p < 0.0005) and for 2D compared to hybrid (M = 6.05, SE = 2.25, p = 0.045), but was not statistically significantly different between the 3D and hybrid interfaces (M = 3.947, SE = 2.08, p = 0.221). For the resident group, the physical task load was statistically significantly lower for the 2D interface compared to 3D (M = 21.0, SE = 3.06, p < 0.0005) and for 2D compared to hybrid (M = 8.5, SE = 2.59, p = 0.028), but was statistically significantly higher for the 3D interface compared to hybrid (M = 12.5, SE = 2.5, p = 0.002) (Fig. 12).
Fig. 12

Mean physical task load

The physical task load for students, when compared to residents, was statistically significantly lower for the 3D interface (12.18 ± 2.71, t(27) = 4.504, p < 0.0005, d = 1.76). There was no statistically significant difference in physical task load between students and residents for the 2D (1.18 ± 2.13, t(27) = 0.555, p = 0.583, d = 0.22), and hybrid (3.63 ± 2.92, t(27) = 1.244, p = 0.224, d = 0.49) interfaces (Fig. 12).

Mode Changes with the 3D Interface

We compared the average number of mode changes using the 3D interface between students (M = 4.58, SD = 1.07) and residents (M = 7.5, SD = 3.28) using an independent samples t test. The number of mode changes by residents was statistically significantly higher than that of students (2.92 ± 0.81, t(27) = 3.590, p = 0.001, d = 1.40).

Age Correlations with 3D Interface

We found a strong positive correlation between age and completion time using the 3D interface (r(29) = 0.715, p < 0.0005). We also found a strong negative correlation between age and SUS score for the 3D interface (r(29) = −0.623, p < 0.0005). However, there was no statistically significant correlation between age and task accuracy (r(29) = −0.027, p = 0.889).

Participant Feedback from Interview

All participants appreciated the availability of the 3D model, as it offered a quick and complete view of the patient’s anatomy. Most medical students reported feeling more confident while performing the diagnosis task in the 3D and hybrid conditions, as the 3D model helped them orient the slice quickly. They also reported that 3D rotation of the arbitrary plane using the 2D mouse was difficult, and that the guidelines on the other orthogonal planes were hard to follow and confusing.

The residents reported feeling more confident in the 2D and hybrid conditions, as they found it easy to use the 2D slice guidelines to verify the integrity of their slice orientation. All of them reported difficulty performing precise annotations and measurements with the stylus in the 3D condition. They suggested that some form of stabilization, or scaling, was essential for using the stylus.

Most participants felt that the mouse was more accurate for the annotation and measurement steps. All participants felt that learning to use the stylus was natural, straightforward, and easy. They reported finding it difficult to annotate and measure using the 3D interface due to unsteady hands while holding the stylus. A few participants mentioned that the limited virtual stylus length forced them to reach further, causing more hand fatigue.


The stylus seemed very intuitive, as no participant had any difficulty maneuvering it. In the 3D condition, we observed that some participants found it difficult to perform precise tasks such as annotations and measurements with the stylus. To improve hand stability, most participants rested their elbow on the table, and some tried using their non-dominant hand to support their dominant hand.

All participants seemed to enjoy the stylus input and stereoscopic display. Near the end of their practice session, they often reloaded the dataset and explored the anatomical features of the 3D model using the stylus, in 3D and hybrid conditions. While in the 2D condition, they would proceed to start the evaluation task. Some participants even requested additional time at the end of their experiment, to further explore the 3D model.

In the 2D condition, medical students seemed confused by the slice guidelines on the other orthogonal planes and found it difficult to make precise 3D rotations using them. Medical students appeared more comfortable using the 3D interface than residents. The resident group seemed to have difficulty remembering the locations and functions of the buttons in the 3D interface, and referred to the 3D control reference card far more frequently than the medical students.

Discussion and Conclusion

Each participant received only 5 min of training followed by 5 min of practice to familiarize themselves with each interface. Despite this, they were able to complete the task in under 5 min with any interface. This shows that little training was required to learn the interface and tools necessary for performing the diagnosis task.

The fastest condition for students was the hybrid interface followed by the 3D and 2D interfaces. They were also more accurate with the hybrid and 3D interfaces compared to 2D. The medical students’ slow and inaccurate performance with the 2D interface could probably be attributed to their inexperience with 2D tools, and the difficulty of performing 3D rotation of the slice plane using the 2D mouse input.

Using the hybrid interface, students were able to achieve the same performance as the more experienced users. Since they had no prior experience with any interface, it shows that they were not only able to learn our hybrid interface quickly, but also to use it very efficiently. We believe this has major implications for improving diagnostic radiology training. While their accuracy using the 2D interface was low, using our hybrid interface, their accuracy improved significantly and was even comparable to that of the residents.

The fastest condition for residents was the 2D interface followed by the hybrid and 3D interfaces. However, much to our surprise, the residents were most accurate using the hybrid interface followed by 3D and 2D interfaces. Since the residents had been using the 2D interface daily for over a year, it is no surprise that they were fastest with the 2D interface and significantly outperformed the students with it. Their expertise with the 2D interface made them faster with our hybrid interface as well, since annotation and measurement were still performed using the 2D tools.

The lower accuracy of residents using the 2D interface, despite their prior experience, could be attributed to the difficulty of orienting the arbitrary slice using 2D mouse input. Such difficulty can foster a habit of accepting measurements with potentially lower accuracy.

By providing an easier way to rotate the arbitrary slice, our hybrid interface increased accuracy despite this habit of “rough measurements.” This is a significant finding. However, the cause could also be the novelty of our hybrid interface, which may have drawn more attention from the residents during the diagnosis task. A future study examining the use of our interface over multiple sessions could help identify the true cause of this effect.

We observed that residents found it difficult to cope with the different task workflow and relied more heavily on the interface reference cards. Our results show that using the 3D interface, on average, residents made significantly more mode changes than the medical students. This indicated that they found it difficult to follow the task workflow in the 3D condition. The residents did not have this problem in the hybrid condition, as it did not involve any 3D mode changes.

The 3D stylus had a one-to-one mapping of hand motion. While this was very natural for exploring the 3D model, precise movements were difficult, and unsteady hands made precise interactions even harder. This explains the difficulties participants experienced with annotation and measurement in the 3D condition. The stylus interaction also demanded more physical hand movement than the 2D mouse, which explains the higher physical task load scores for the 3D and hybrid interfaces.

We found a strong positive correlation between age and task completion time, with older participants taking longer to complete the task in the 3D condition. This behavior is similar to that observed by Zudilova-Seinstra et al. [61] in their study, although their evaluation task was quite different. We also found a strong negative correlation between age and the 3D SUS score, with older users rating the 3D interface lower. However, their accuracy scores were not affected by age.

The resident group was the older user group, with ages ranging from 28 to 38 years (“Participants” section). Hence, we believe that the observed correlation is a result of their extensive prior experience with InteleViewer’s 2D interface. It is hard to unlearn behaviors that are similar in context but differ in interaction.

Feedback from the subjects in the semi-structured interview showed that most participants preferred the hybrid interface among the three conditions, since it combined the 3D model (showing the patient anatomy), the stylus input for 3D slice manipulation, the synchronized 2D view for verification, and the familiar and precise 2D tools (for annotations and measurements).

The medical students, in particular, appreciated having the 3D model, since it helped them better understand the patient’s anatomy, compared to the 2D interface. This is likely the reason why they gave the lowest SUS score for the 2D interface and a relatively high SUS score for the 3D and hybrid interfaces despite the stylus precision issues. The residents gave the lowest SUS score to the 3D interface due to the change in task workflow, and stylus precision problems for the annotations and measurements. Their SUS score for the 2D interface was higher, but this was expected due to their familiarity with the system.

A baseline SUS score of 68 is often used to determine major usability issues with a user interface [62]. All the SUS scores we obtained were higher than this baseline score, within the margin of error. This shows that there were no major usability issues with any interface.

The 3D model was intended to give an overview of the anatomy for inexperienced users so that they could quickly and easily understand the position and orientation of a slice, relative to the entire scan. Although we used a mesh for visualizing the anatomy, other volume rendering techniques could be used to better represent soft tissues.

Stereoscopic 3D is a key component of our 3D interface. It was required to simulate the holographic 3D experience by rendering the 3D model in the negative parallax region. We made sure the scene contrast, eye separation, and scene depth were within the optimal range [63]. Since the maximum task completion time did not exceed 5 min, we believe it is unlikely that users experienced any ill effects from stereoscopic 3D. We received no feedback from the users that would suggest otherwise.


There were a limited number of radiology residents; hence, the sample size for the resident group was relatively small. In the 3D condition, the lack of stylus stabilization and the fixed virtual stylus length appeared to have the most negative impact on task performance and contributed to increased fatigue.

For our prototype setup, a standard 2D screen was used for the 2D interface. We did not use a specialized 12-bit gray-scale radiology calibrated screen. Since our evaluation task only involved looking at bones, we believe that our standard 2D screen had sufficient contrast to perform the task. In the 2D interface, the users were required to hold the [shift] key in order to rotate the slice. This might have resulted in a minor disadvantage for the 2D condition.

In diagnostic radiology, the need for 3D measurements is rare compared to 2D measurements. Hence, tasks that require 3D measurements are perceived as very specialized. Difficulty in performing 3D measurements can lead to simplified surrogate measurements being obtained. For example, abdominal aortic aneurysm maximum diameter measurements are performed horizontally in the transverse plane rather than representing the true maximum aneurysm diameter, introducing a small potential measurement error. Thus by making 3D manipulation and measurement easy, we can avoid the need for such surrogate measurements in diagnostic radiology.

The radiology diagnosis task used for our evaluation is a specialized case. We chose a diagnosis task that required some level of 3D manipulation to fairly evaluate our system, while still being easy enough for medical students. Only a limited number of diagnostic radiology tasks require 3D manipulation.

Fatigue from long diagnosis tasks can lead to mistakes in diagnosis. However, we believe this was not a factor in our study, since the maximum task completion time for any interface condition did not exceed 5 min; hence, the total time for all three tasks was under 15 min. The effects of fatigue could be explored in a future study by choosing a relatively complex diagnosis task and measuring subjects’ fatigue using the full NASA TLX survey [58].

One issue mentioned was the mapping of the buttons on the 3D stylus. While some participants were satisfied with the existing mapping, others preferred the left and right stylus buttons to be mapped like those of a mouse, even though this was not ergonomic: the left button on the stylus was harder to reach with the index finger while holding it. Despite this, those participants preferred the most common functions to be mapped to the left stylus button, to mirror the traditional mouse (where most functions are performed with the left button).

Future Work

The 3D stylus precision could be improved with some form of scaled manipulation, such as PRISM [64], introduced by Frees et al. Speech input could be explored to improve mode switching. Interaction with the 3D components could also be improved by exploring other forms of 3D input, such as freehand input.
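The core idea of PRISM-style scaled manipulation is a hand-speed-dependent control-display gain: very slow motion is treated as tremor and suppressed, slow motion is scaled down for precision, and fast motion maps 1:1. A minimal sketch of the idea; the threshold values are illustrative assumptions, not parameters from [64]:

```python
def prism_gain(speed, min_speed=0.01, scaling_speed=0.15):
    """Control-display gain as a function of hand speed (m/s).

    Thresholds (min_speed, scaling_speed) are illustrative, not from PRISM [64].
    """
    if speed < min_speed:
        return 0.0                    # treat very slow motion as tremor: ignore it
    if speed < scaling_speed:
        return speed / scaling_speed  # slow motion: scaled-down, precise movement
    return 1.0                        # fast motion: direct one-to-one mapping

def scaled_displacement(dx, dt):
    """Displacement applied to the virtual stylus for a hand displacement dx over dt."""
    return dx * prism_gain(abs(dx) / dt)
```

A scheme like this would address the unsteady-hand problems participants reported during annotation and measurement, at the cost of an offset between hand and virtual stylus that must eventually be recovered.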

Further studies can be run with different diagnostic radiology tasks, possibly with a larger pool of residents to observe differences in performance. Additional experimental data could be captured such as hand and head tracking to study user behavior in more detail. More subjective feedback could be gathered for future studies about the individual components within the hybrid interface. It would be interesting to explore effects (especially fatigue) while using the interface for a longer period of time.

Users would be expected to perform even better with additional training with the hybrid interface. It would be interesting to explore long-term learning effects.


We introduced a hybrid interface for improving 3D manipulation in radiological diagnosis. This interface combined the zSpace stereoscopic system with a 2D display, and mouse and keyboard input. We also presented an evaluation involving a user study diagnosing scoliosis for three conditions with two groups. The study results show that the hybrid interface allows users to achieve higher accuracy in tasks involving 3D rotation of anatomical planes, compared to the traditional 2D interface.

Users were able to learn all the interfaces (2D, 3D, and hybrid) after a 5-min training session and were later able to perform the scoliosis diagnosis task. Compared to the 2D interface, the novice users were able to perform the task faster and with a significant increase in accuracy using our hybrid interface. The experienced users were slightly slower using our hybrid interface, but their diagnosis was more accurate.



We would like to thank all the radiology residents from Christchurch Hospital and the fourth-year medical students from the University of Otago who participated in the evaluation of our system. This work was funded by MARS Bioimaging [65].


References

  1. Chen M, Mountford SJ, Sellen A. A study in interactive 3-D rotation using 2-D control devices. ACM SIGGRAPH Computer Graphics 1988;22(4):121–129.
  2. Hinckley K, Tullio J, Pausch R, Proffitt D, Kassell N. Usability analysis of 3D rotation techniques. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. ACM; 1997. p. 1–10.
  3. Bowman D, Kruijff E, LaViola Jr JJ, Poupyrev IP. 3D user interfaces: theory and practice. Addison-Wesley, 2004.
  4. Graves MJ, Black RT, Lomas DJ. Constrained surface controllers for three-dimensional image data reformatting. Radiology 2009;252(1):218–224.
  5. zSpace. zSpace, Inc. Accessed: 2017-01-09.
  6. Emerson T, Prothero JD, Weghorst SJ. Medicine and virtual reality: a guide to the literature (medVR). Human Interface Technology Laboratory, 1994.
  7. Ayache N. Medical computer vision, virtual reality and robotics. Image and Vision Computing 1995;13(4):295–313.
  8. Székely G, Satava RM. Virtual reality in medicine. BMJ: British Medical Journal 1999;319(7220):1305.
  9. Gallo L, Minutolo A, De Pietro G. A user interface for VR-ready 3D medical imaging by off-the-shelf input devices. Computers in Biology and Medicine 2010;40(3):350–358.
  10. Hand C. A survey of 3D interaction techniques. Computer Graphics Forum, vol 16, no 5. Wiley; 1997. p. 269–281.
  11. Shoemake K. Arcball: a user interface for specifying three-dimensional orientation using a mouse. Graphics Interface; 1992. p. 151–156.
  12. Henriksen K, Sporring J, Hornbæk K. Virtual trackballs revisited. IEEE Trans Visual Comput Graphics 2004;10(2):206–216.
  13. Bade R, Ritter F, Preim B. Usability comparison of mouse-based interaction techniques for predictable 3D rotation. International Symposium on Smart Graphics. Springer; 2005. p. 138–150.
  14. Hinckley K, Pausch R, Goble JC, Kassell NF. Passive real-world interface props for neurosurgical visualization. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 1994. p. 452–458.
  15. Frohlich B, Plate J, Wind J, Wesche G, Gobel M. Cubic-mouse-based interaction in virtual environments. IEEE Comput Graphics Appl 2000;20(4):12–15.
  16. Gallo L, De Pietro G, Marra I. 3D interaction with volumetric medical data: experiencing the Wiimote. Proceedings of the 1st International Conference on Ambient Media and Systems. ICST; 2008. p. 14.
  17. Mauser S, Burgert O. Touch-free, gesture-based control of medical devices and software based on the Leap Motion controller. Stud Health Technol Inform 2014;196:265–270.
  18. Gallo L, Placitelli AP, Ciampi M. Controller-free exploration of medical image data: experiencing the Kinect. 24th International Symposium on Computer-Based Medical Systems (CBMS). IEEE; 2011. p. 1–6.
  19. Ruppert GCS, Reis LO, Amorim PHJ, de Moraes TF, da Silva JVL. Touchless gesture user interface for interactive image visualization in urological surgery. World Journal of Urology 2012;30(5):687–691.
  20. Balakrishnan R, Baudel T, Kurtenbach G, Fitzmaurice G. The Rockin’Mouse: integral 3D manipulation on a plane. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. ACM; 1997. p. 311–318.
  21. Dang NT, Tavanti M, Rankin I, Cooper M. A comparison of different input devices for a 3D environment. Int J Ind Ergon 2009;39(3):554–563.
  22. Zudilova-Seinstra EV, de Koning PJ, Suinesiaputra A, van Schooten BW, van der Geest RJ, Reiber JH, Sloot PM. Evaluation of 2D and 3D glove input applied to medical image analysis. Int J Hum Comput Stud 2010;68(6):355–369.
  23. Bérard F, Ip J, Benovoy M, El-Shimy D, Blum JR, Cooperstock JR. Did “Minority Report” get it wrong? Superiority of the mouse over 3D input devices in a 3D placement task. IFIP Conference on Human-Computer Interaction. Springer; 2009. p. 400–414.
  24. Wang G, McGuffin MJ, Bérard F, Cooperstock JR. Pop-up depth views for improving 3D target acquisition. Proceedings of Graphics Interface 2011. Canadian Human-Computer Communications Society; 2011. p. 41–48.
  25. Feiner S, Shamash A. Hybrid user interfaces: breeding virtually bigger interfaces for physically smaller computers. Proceedings of the 4th Annual ACM Symposium on User Interface Software and Technology. ACM; 1991. p. 9–17.
  26. Fitzmaurice GW, Zhai S, Chignell MH. Virtual reality for palmtop computers. ACM Trans Inf Syst (TOIS) 1993;11(3):197–218.
  27. Angus IG, Sowizral HA. Embedding the 2D interaction metaphor in a real 3D virtual environment. IS&T/SPIE Symposium on Electronic Imaging: Science & Technology. International Society for Optics and Photonics; 1995. p. 282–293.
  28. Hachet M, Guitton P, Reuter P. The CAT for efficient 2D and 3D interaction as an alternative to mouse adaptations. Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM; 2003. p. 225–112.
  29. Darken RP, Durost R. Mixed-dimension interaction in virtual environments. Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM; 2005. p. 38–45.
  30. 30.
    Wang J, Lindeman RW. Object impersonation: towards effective interaction in tablet-and HMD-based hybrid virtual environments. 2015 IEEE Virtual Reality (VR). IEEE; 2015. p. 111–118.Google Scholar
  31. 31.
    Wloka M. Interacting with virtual reality. Virtual Prototyping. Springer; 1995. p. 199–212.Google Scholar
  32. 32.
    Coninx K, Van Reeth F, Flerackers E. A hybrid 2D/3D user interface for immersive object modeling. Proceedings of Computer Graphics International, 1997. IEEE; 1997. p. 47–55.Google Scholar
  33. 33.
    Rekimoto J. Pick-and-drop: a direct manipulation technique for multiple computer environments. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. ACM; 1997. p. 31–39.Google Scholar
  34. 34.
    Ullmer B, Ishii H. The metaDESK: models and prototypes for tangible user interfaces. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. ACM; 1997. p. 223–232.Google Scholar
  35. 35.
    Riva G. Virtual reality for health care: the status of research. Cyberpsychol Behav 2002;5(3):219–225.CrossRefPubMedGoogle Scholar
  36. 36.
    Arvanitis TN. Virtual reality in medicine. Handbook of Research on Informatics in Healthcare and Biomedicine. IGI Global; 2006. p. 59–67.Google Scholar
  37. 37.
    Pensieri C, Pennacchini M. Virtual Reality in medicine. Handbook on 3D3C Platforms. Springer; 2016. p. 353–401.Google Scholar
  38. 38.
    Baumgärtner S, Ebert A, Deller M, Agne S. 2D meets 3D: a human-centered interface for visual data exploration. CHI’07 Extended Abstracts on Human Factors in Computing Systems. ACM; 2007. p. 2273–2278.Google Scholar
  39. 39.
    Bornik A, Beichel R, Kruijff E, Reitinger B, Schmalstieg D. A hybrid user interface for manipulation of volumetric medical data. IEEE Symposium on 3D User Interfaces, 3DUI 2006. IEEE; 2006. p. 29–36.Google Scholar
  40. 40.
    Teistler M, Breiman R, Lison T, Bott O, Pretschner D, Aziz A, Nowinski W. Simplifying the exploration of volumetric images: development of a 3D user interface for the radiologist’s workplace. J Digit Imaging 2008;21(1):2–12.CrossRefGoogle Scholar
  41. 41.
    Teistler M, Ampanozi G, Schweitzer W, Flach P, Thali M, Ebert L. Use of a low-cost three-dimensional gaming controller for forensic reconstruction of CT images. J Forensic Radiol Imaging 2016;7:10–13.CrossRefGoogle Scholar
  42. 42.
    Aamir R, Chernoglazov A, Bateman C, Butler A, Butler P, Anderson N, Bell S, Panta R, Healy J, Mohr J, et al. MARS Spectral molecular imaging of lamb tissue: data collection and image analysis. J Instrum 2014;9(02):P02005.CrossRefGoogle Scholar
  43. 43.
    Rajendran K, Walsh M, De Ruiter N, Chernoglazov A, Panta R, Butler A, Butler P, Bell S, Anderson N, Woodfield T, et al. Reducing beam hardening effects and metal artefacts in spectral CT using medipix3RX. J Instrum 2014;9(03):P03015.CrossRefGoogle Scholar
  44. 44.
    Rajendran K, Löbker C, Schon BS, Bateman CJ, Younis RA, de Ruiter NJ, Chernoglazov AI, Ramyar M, Hooper GJ, Butler AP, et al. Quantitative imaging of excised osteoarthritic cartilage using spectral CT. Eur Radiol 2017;27(1):384–392.CrossRefPubMedGoogle Scholar
  45. DCMTK: OFFIS DICOM Toolkit. OFFIS. Accessed: 2017-01-12.
  46. Qt API. The Qt Company. Accessed: 2017-01-12.
  47. OpenGL API. Khronos Group. Accessed: 2017-01-12.
  48. CUDA API. NVIDIA Corporation. Accessed: 2017-01-12.
  49. InteleViewer. Intelerad Medical Systems Incorporated. Accessed: 2017-01-09.
  50. Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. ACM SIGGRAPH Computer Graphics. ACM; 1987. p. 163–169.
  51. Dai Y, Zheng J, Yang Y, Kuai D, Yang X. Volume-rendering-based interactive 3D measurement for quantitative analysis of 3D medical images. Comput Math Methods Med 2013;2013.
  52. Preim B, Tietjen C, Spindler W, Peitgen HO. Integration of measurement tools in medical 3D visualizations. Proceedings of the Conference on Visualization '02. IEEE Computer Society; 2002. p. 21–28.
  53. Human Ethics Committee, University of Canterbury. Accessed: 2016-06-01.
  54. Brant WE, Helms CA. Fundamentals of diagnostic radiology. Lippincott Williams & Wilkins; 2012.
  55. Drebin RA, Carpenter L, Hanrahan P. Volume rendering. ACM SIGGRAPH Computer Graphics, vol 22, no 4. ACM; 1988. p. 65–74.
  56. Cox GM, Cochran W. Experimental designs. Wiley; 1953.
  57. Brooke J. SUS: a quick and dirty usability scale. Usability Evaluation in Industry 1996;189(194):4–7.
  58. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol 1988;52:139–183.
  59. Maxwell SE, Delaney HD. Designing experiments and analyzing data: a model comparison perspective, vol 1. Psychology Press; 2004.
  60. Hartung J, Knapp G, Sinha BK. Statistical meta-analysis with applications, vol 738. Wiley; 2011.
  61. Zudilova-Seinstra E, van Schooten B, Suinesiaputra A, van der Geest R, van Dijk B, Reiber J, Sloot P. Exploring individual user differences in the 2D/3D interaction with medical image data. Virtual Reality 2010;14(2):105–118.
  62. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud 2009;4(3):114–123.
  63. Patterson RE. Human factors of stereoscopic 3D displays. Springer; 2015.
  64. Frees S, Kessler GD. Precise and rapid interaction through scaled manipulation in immersive virtual environments. IEEE Proceedings of Virtual Reality 2005 (VR 2005). IEEE; 2005. p. 99–106.
  65. MARS Bioimaging Ltd. Accessed: 2017-01-09.

Copyright information

© Society for Imaging Informatics in Medicine 2017

Authors and Affiliations

  • Veera Bhadra Harish Mandalika (1, 4, 5)
  • Alexander I. Chernoglazov (1, 4)
  • Mark Billinghurst (2)
  • Christoph Bartneck (1, 5)
  • Michael A. Hurrell (3)
  • Niels de Ruiter (1, 3, 4, 5)
  • Anthony P. H. Butler (1, 3, 4)
  • Philip H. Butler (1, 4)

  1. University of Canterbury, Christchurch, New Zealand
  2. University of South Australia, Adelaide, Australia
  3. Division of Health Sciences, University of Otago, Christchurch, New Zealand
  4. MARS Bioimaging Ltd., Christchurch, New Zealand
  5. HIT Lab NZ, Christchurch, New Zealand