1 Introduction

Virtual Reality (VR) has long been discussed for its potential to revolutionize activities ranging from video games to educational programs to core business processes. Until recently, VR technology's utility was limited by the cost and quality of video rendering, optics, and motion-tracking technology. However, recent advancements in both software and hardware are vastly expanding the capabilities of VR programs. Several industries have taken notice and are attempting to integrate VR technology into their existing processes.

Engineering in particular stands to benefit from the introduction of VR. Engineers often construct and interact with complex three-dimensional (3D) models using two-dimensional (2D) computer-aided design (CAD) programs. VR offers the potential of a 3D interface to match the 3D models engineers work with, allowing them to interact directly with their work and removing the need for complex mouse and keyboard controls simply to view and navigate.

Our team conducted several interviews with engineers and managers who have experience developing and reviewing traditional CAD models to determine the limitations of such technologies and how they could be improved. We also surveyed engineers for feedback about more specific challenges of using traditional CAD software and what they would like to see in a 3D VR program. We found that viewing and selecting specific elements of objects, especially those nested within other objects, proved a challenge to many users, and that getting a sense of scale and layout requires years of experience with 2D software, forcing engineers to create costly 3D prototypes to fully communicate designs to nontechnical users.

Using these insights, we developed a prototype program using the Unity platform to be tested on the Oculus Rift VR device. We tested that prototype with several people with varying levels of experience in engineering. Our major takeaway was that while a VR interface is more intuitive than a 2D one when selecting elements, especially for new users, context is vital: making sure the user understands the environment, the object(s) to be controlled, and the controller itself is necessary for users to get the most out of VR.

The code base for our prototype can be found on GitHub: https://goo.gl/D8HaLQ.

1.1 Contributions

VR offers those without years of technical experience the ability to explore and understand complex three-dimensional objects. This has the potential to revolutionize fields such as engineering, architecture, and healthcare by improving communication between technical experts and non-experts.

Our research provides evidence that VR offers a more intuitive and easier-to-control interface than 2D CAD programs, and our guidelines will help designers of VR systems create useful and usable applications.

2 Background

2.1 Review of Relevant Literature

Until recently, virtual reality (VR) technology has not been sophisticated enough to emulate many of the tasks that engineers and designers need to complete their work. Two-dimensional (2D) and three-dimensional (3D) CAD modeling software is in no imminent danger of being replaced by VR software, but industry experts agree that VR has the potential to replace those systems in the next 10–20 years. As a result, most current VR software focuses on enhancing existing processes rather than replacing them, as users familiarize themselves with the hardware and as more sophisticated software and hardware are developed.

As background for our study, we researched what current user interfaces (UIs) are popular in VR applications, particularly those in an industrial setting. To do so we reviewed white papers, scholarly articles, press releases, marketing materials, industry reports, and popular press articles. We also reviewed videos and presentations from conferences focused on virtual reality, where much of the discussion of UI development takes place. Due to time and resource constraints, we did not review individual patents, instead relying on papers and articles discussing their contents. Our goal is to present the research and work being done on UI design in VR, describe the current state of VR use in industrial settings, identify existing roadblocks to further development and adoption, and explain how those roadblocks are being overcome through innovative UI and technological development.

2.2 User Interface Design

The overarching goal in user interface (UI) design is to accurately and effortlessly translate a user's intent into action. In VR, this means using gestures that feel intuitive and natural. Alger explains that in order for VR technology to be available and used on a large scale, the barrier to adoption must be very small (Alger 2015a, b, c). Users who are not early adopters do not want to be dissuaded by complex gestures or motions needed to make the software work; rather, they should be able to instinctively understand how what they see works, as one does with a straw or a hammer. In addition, traditional user experience (UX) elements, such as text, images, graphics, and audio, must seamlessly integrate with whatever system is in place so the user does not feel disoriented.

VR UI designers have identified several potential frameworks for user interaction in a VR environment. Many people naturally reach out to touch VR objects, seeking haptic feedback (Alger 2015a, b, c). Even when they know this is not the way to interact with an object, the instinct is difficult to suppress. Other combinations of wands and cursors used in either hand have been tested, but no definitive standard has yet emerged. Another interface that has been tested is a gaze-based interface, whereby the user focuses his or her vision on a particular object and either that focus or a button on the head-mounted device (HMD) performs an action on the object (Samsung 2014).

Another type of UI involves using a virtual keyboard to accept user input (Alger 2015a, b, c), while still others attempt to mimic more recent technology by using swipe, pinch, and scroll gestures (Samsung 2014). These have been developed in part to lessen the difficulty of adjusting to a completely new technology and environment, especially for less technologically adept users (GDC 2017). Giving feedback, whether haptic, visual, or audible, is necessary to assure users they are performing the intended motion and encourage them to continue (Oculus 2015). Range of motion, vision, and ergonomics must also be taken into account when designing user interfaces, as users will not want to use a technology that is physically uncomfortable.

Designers must also figure out how to adjust 2D objects for a 3D environment. Mike Alger discussed needing to have a 2D screen within his 3D environment so users could, for example, read emails, which are designed for 2D reading. At the 2014 Samsung Developer Conference, Alex Chu explained that users found 2D thumbnails easier to understand, even if translating the content into 3D thumbnails was more visually appealing. Depth, usually not an issue in 2D, plays a prominent role in 3D design, as users can feel disoriented or even scared if objects do not appear at a plausible depth. At the 2016 Google I/O conference, the Google Daydream team discussed using layers similar to those found in Adobe Photoshop to mimic the parallax effect, a phenomenon the human brain uses to perceive depth from changes in perspective (Google Developers 2016).

2.3 Current Industrial Use

Virtual Reality. To date, most virtual reality applications in industrial settings have focused on three major areas: employee training, employee safety, and prototype/design review. Every company that has embraced VR for these purposes hopes to take advantage of remote collaboration capabilities and of a cheaper, more efficient way of reviewing products and processes before beginning an expensive and time-consuming prototyping process. Most major engineering and manufacturing firms purchase technologies from companies that specialize in developing VR software, such as ESI Group and EON Reality, whose products can be tailored to a specific industry and use case.

An Invensys report from 2010 identified the most challenging aspect of using VR in process industries: users must maintain a focus on collaboration and learning while being limited to the single-user experience provided by the hardware, normally a headset (Invensys 2010). Additionally, users need to believe they are actually immersed in a virtual world in order to interact as they would in a real-world setting. However, due to technical limitations, it is neither possible nor desirable to make every object dynamic, so the program must track the state of every object in its memory and render any changes in real time. Real-time rendering is both the most useful and among the most challenging aspects of VR in an industrial setting, according to experts.

Current software used in industrial settings focuses more on visualization and positioning to solve problems concerning safety, ergonomics, and design. For example, Ford uses its VR software to visualize the interior of its automobiles so designers and engineers can see the layout from a driver's perspective before manufacturing the first prototype (Forbes 2014). Ford's VR program has been used to launch 100 prototype vehicles, leading to a 70% reduction in worker safety accidents and a 90% drop in ergonomics complaints (Martinez 2015). Other auto companies, such as Volkswagen and Jaguar Land Rover, have also integrated VR technology into their assembly lines in various capacities.

These renderings are also shared with executives who may offer their own input, but in this case the UI is more limited, allowing for viewing but not editing. Even those programs that allow editing are used only for small changes, such as simple positioning or extruding, and are not used to build an object from scratch (PricewaterhouseCoopers 2016). These limitations are due more to current technological restrictions than to any lack of practical use for the technology; Goldman Sachs predicts that CAD and CAM software are the technologies most likely to be disrupted by VR in industrial settings, estimating that 3.2 million users will regularly use the technology by 2025 (Goldman Sachs 2016).

Augmented Reality and Mixed Reality. Compared to VR, augmented reality (AR) and mixed reality (MR) are slightly more mature fields in the industrial sector. AR and MR generally do not face the same UI challenges as VR simply because the technologies work differently. In AR applications, users interact with real-world objects through some kind of virtual overlay; for example, smart glasses can display a product's information or the quantity remaining in a warehouse as the user looks at the product, instead of the user having to find the product's information on a mobile device or notebook (PricewaterhouseCoopers 2016). Other potential uses include allowing remote experts to see what a technician sees and give instruction from far away, or embedding temperature or motion sensors to detect problems in machines before they reach a critical point and require a shutdown.

MR is a more complex technology that combines elements of virtual and augmented reality. A recent study by Accenture Technology described MR as a next-generation digital experience driven by the real-world presence of intelligent virtual objects, enabling people to interact with those objects within their real-world field of view (Accenture Technology 2016). MR hardware typically makes use of three primary technologies: infrared to map physical surroundings, infrared to capture gestures by the user and others, and natural language processing for voice recognition. Machine learning and artificial intelligence algorithms then piece together a virtual world around the user that he or she can interact with. MR can offer some of the same benefits as VR without the UI difficulties. MR applications can be used for remote collaboration, training, or virtual prototypes of physical objects, similar to existing VR technologies. However, it remains to be seen whether the benefits of MR, namely the ability to maintain spatial awareness and real-world interaction, outweigh the drawback of not being fully immersed in the virtual world.

2.4 Current Non-Industrial Use

The most popular use for VR on the market right now is entertainment, specifically video games. Creating and manipulating virtual worlds is not a new concept in video games; games such as World of Warcraft immerse the player in a virtual world complete with quests, characters, and objects and allow the player to communicate with other players. Its premier user experience has led World of Warcraft to capture over 50% of the massively multiplayer online role-playing game market for several consecutive years (The Journal of Technology Studies 2014). Video games typically use either the first-person or third-person perspective, with an avatar representing the user as the user progresses in the game. Some VR UI designers have attempted to use avatars in a similar fashion, particularly those working in manufacturing settings, but this limits the functionality of the program (PricewaterhouseCoopers 2016). Several VR hardware devices have emulated traditional video game hardware, namely joysticks and buttons as inputs, because these are typically intuitive for an early-adopting crowd.

Two other notable fields where VR and AR are being adopted, and where UI is a notable challenge discussed in the literature, are the military and education. Militaries around the world are using VR applications to train both air and ground forces for tasks such as ground training, collaboration, and field medicine. VR enables users to simulate battlefield environments and weapons usage without being exposed to live ammunition, and to learn from remote experts while being exposed to realistic situations. Obviously, the specifics of these programs are not widely known, but it is known that British troops stationed in Germany have used VR software in preparation for deployment to Afghanistan (Virtual Reality Society).

In the education field, VR has mostly been used to give students access to experiences they would not normally have, such as a substitute for field trips. The use of AR has been observed more; the Journal of Technology Studies recently studied the use of AR in education and found that it could have many benefits for students, including increased engagement and comprehension, without the UI concerns that come with using VR (Antonioli et al. 2014). Furthermore, AR allows students to maintain awareness of their space, so teachers don't have to spend as much effort supervising them. But while the technology was very beneficial, it required that teachers either have a certain level of technological knowledge or receive specific training, costing both time and money. From a UI perspective, the work with younger students showed that intuitive gestures and content can be understood by almost any audience, but much work remains to determine which gestures are most intuitive for particular actions.

3 Testing Methods

3.1 Interviews

Our team conducted several interviews with other students and professionals who have experience using CAD software. Some of the interview subjects had worked as engineers in large corporations while others had more academic than professional experience. Subjects' backgrounds spanned a variety of industries, including architecture, food manufacturing, clothing manufacturing, and electrical engineering, but all had used CAD systems in some capacity and had an opinion on the strengths and weaknesses of CAD software. We sought perspectives from various kinds of users, rather than just engineers, to better understand what role a VR system could play throughout the product development lifecycle.

Interview questions focused on the subjects' experience in their industry, their experience designing models using CAD software, and the problems they faced in building such models. We also asked questions about specific use cases for VR, such as current methods of and potential for VR use in collaboration, feedback, and prototyping. Questions were not asked about any particular software, instead focusing on general descriptions of how tasks were completed using 2D modeling software.

Engineers. In total, we interviewed three subjects who are engineers, in addition to the survey of engineers and the managers who have engineering experience (both described in more detail in later sections). Each subject had experience both in a classroom setting and in a professional setting building CAD models. We distilled their feedback down to three major takeaways:

  1.

    Each part is typically owned by one engineer, and it is rare for multiple people to work on a part in tandem. When working in a group, engineers typically do not make live adjustments to a model; if they need to, they do so separately on physical sketches. In a group setting, one person drives (controls the mouse, keyboard, and computer) while the others watch.

  2.

    It is difficult to look within an object or see cross-sections. Typically the user must select a part and either hide it or make it transparent to see the parts behind it, which may take several steps. This also leaves the user vulnerable to missing certain parts and is more time-consuming than engineers would prefer.

  3.

    Making small alterations is difficult due to the precision needed, and sometimes a user can make an alteration (e.g. a small extrusion or angle change) without realizing it. This can have severe consequences if the user doesn't catch it early enough, but often there is no good way to see the error.

Artists and Architects. Interviews with two architects and two 3D animation artists revealed that the key pain points of currently available 3D modeling software are slow rendering times, version control, software stability, and portability between software variants.

Other notable feedback included:

  1.

    Accuracy was more important to architects, whereas visual aesthetics was more important to artists when modeling. However, once a model had been finalized, keeping the exact same ratios and measurements was also important to artists.

  2.

    Architects and artists did not find working on 2D screens challenging, mostly because of the level of proficiency they had already achieved with 2D software. Some even found current software easy to use when accompanied by accessories such as a 3D mouse or a Wacom tablet and pen. However, both groups initially experienced difficulty using the software during their first few months.

  3.

    Artists always work in 3D, whereas architects mostly work in 2D and use 3D only when showing the model to clients or other stakeholders, or at the end of final modifications to verify the feel of the room.

Managers. Overall, five subjects were interviewed. Interviewees had managerial experience in industrial settings, including industries such as food manufacturing, clothing manufacturing, and medical devices. Each had some experience either building CAD models themselves or working on engineering teams to review and give feedback on such models. Here are the three main points from the interviews:

  1.

    Collaboration is not a barrier in the design and review process, as the currently employed methods (phone, email, in-person conversations) are effective. Additionally, some issues must be discussed in person, such as which materials to use or more detailed questions of scale, which can be difficult to convey in a purely visual medium. Some issues exist, notably giving timely feedback when working on a deadline, but most subjects agreed that this was a people issue more than a technology issue and would likely persist even with VR technology.

  2.

    Scale and layout are issues with current software. It can be difficult to estimate scale in a purely visual medium, and a slight miscommunication can lead to problems down the road. Several subjects pointed out the challenges of designing the layout for a factory and the difficulties if spaces and distances are not accurately estimated.

  3.

    Precision is extremely important. Current CAD software operates at a precision of about five ten-thousandths (0.0005) of an inch, and margins in many manufacturing outfits are extremely thin, so the slightest error can cost a company a great deal of money.

3.2 Survey

After conducting user interviews, we theorized that element selection would be a pain point that could be addressed by VR and decided to pursue further user research. In addition to the interviews with students and professionals, we conducted a survey of engineers and designers. The goal was to understand what specific problems engineers encounter with element selection when using 2D CAD models.

The survey was divided into three main sections: general information/background, experience and difficulties faced with element selection, and other information. Users were given ample opportunity to describe specific problems they faced, and several chose to do so. In total we received seven responses to the survey, with four of the users being engineers or managers of engineering teams.

From the survey responses we extracted three key takeaways related to element selection in 2D CAD models:

  1.

    Element selection is very tedious. It can require frequent zooming in and out to reach the right magnification to select the correct element, and the user frequently has to rotate the point of view, which causes disorientation when working with multiple or embedded objects.

  2.

    The software would be better if it only allowed selection of one type of element at a time. Users felt that a good feature would be a sub-menu or similar interface that allowed them to pre-determine which type of element can be selected.

  3.

    Multiple selection and embedded element selection would be useful features in VR. Both of these are difficult to do in 2D models. Due to the difficulty of rotating and zooming in/out to find the right item to select, users often accidentally select the wrong element or deselect everything, causing the user to start over and waste a lot of time.

4 Hypothesis

Following our interviews and survey, we decided to focus on element selection, as that capability was widely applicable and commonly mentioned as difficult. Element selection affects many different functions of using a model, including scaling, extrusion, movement, and testing.

We hypothesized that VR software would grant its users a clearer sense of scale and better three-dimensional precision than 2D software, and thus be more intuitive and easier to control. However, the extra arm movement and risk of disorientation in VR could cause discomfort compared to a mouse, keyboard, and monitor.

Fig. 1. Our prototype was designed to use the Oculus Rift's motion-tracked controllers, as seen here.

5 VR Prototype User Testing

To compare element selection tasks on desktop versus virtual reality, we created a VR interface in which users could perform basic element selection. We used the Unity game engine and the VR Tool Kit (VRTK) framework to create an Oculus Rift application in which users could select elements of objects. A video introduction of the controls may be viewed at the following link: https://www.youtube.com/watch?v=0fG9dh5uC04 (Fig. 1).

5.1 Prototype Functions

Navigation

  • The grip buttons, on the side of the controllers, are used for navigation.

  • To move, hold one grip; you can push or pull yourself towards objects.

  • To rotate, hold one grip and push the joystick left or right.

  • To grow or shrink, hold both grips and bring the controllers towards or away from one another. (A code sketch of this navigation scheme follows the list.)
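To make the scheme concrete, the following is a minimal Unity C# sketch of this kind of "grab the world" locomotion. It is illustrative rather than our production code: the class and helper names (GrabWorldLocomotion, GetGrip, GetStickX) are hypothetical stand-ins, and in the actual prototype the grip and joystick values would come from VRTK events or the Oculus Touch input API.

```csharp
// Hypothetical sketch (not our production code) of the grip-based navigation
// described above. playerRig is the root transform carrying the camera and
// both hand transforms.
using UnityEngine;

public class GrabWorldLocomotion : MonoBehaviour
{
    public Transform playerRig;          // root of camera + controllers
    public Transform leftHand, rightHand;
    public float rotateSpeed = 60f;      // degrees per second for joystick turns

    private Vector3 lastLeftPos, lastRightPos;
    private float lastHandDistance;

    void Update()
    {
        bool leftGrip = GetGrip(left: true);
        bool rightGrip = GetGrip(left: false);

        if (leftGrip && rightGrip)
        {
            // Grow/shrink: scale the rig by the ratio of hand separation,
            // so moving the controllers apart or together resizes the user.
            float distance = Vector3.Distance(leftHand.position, rightHand.position);
            if (lastHandDistance > 0f)
                playerRig.localScale *= lastHandDistance / distance;
            lastHandDistance = distance;
        }
        else
        {
            lastHandDistance = 0f;
            if (leftGrip || rightGrip)
            {
                // Move: drag the rig opposite to the gripping hand's motion,
                // which keeps that hand pinned in world space ("grab the world").
                Transform hand = leftGrip ? leftHand : rightHand;
                Vector3 lastPos = leftGrip ? lastLeftPos : lastRightPos;
                playerRig.position -= hand.position - lastPos;

                // Rotate: joystick left/right turns the rig in place.
                playerRig.Rotate(Vector3.up, GetStickX(leftGrip) * rotateSpeed * Time.deltaTime);
            }
        }

        lastLeftPos = leftHand.position;
        lastRightPos = rightHand.position;
    }

    // Stand-ins for real controller queries (e.g. VRTK events or OVRInput).
    bool GetGrip(bool left) { return false; }
    float GetStickX(bool leftHandActive) { return 0f; }
}
```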

Selection

  • Hover the controller over a valid element, and that element will light up, indicating it's in focus. (A sketch of this selection behaviour follows the list.)

  • Press the trigger to select the item in focus, turning it red.

  • Hold the A or X button to be able to select multiple objects. (Like holding the control key on a keyboard while choosing files.)

  • You can filter what kind of elements you want to select by using the joystick to bring up the Element Type Filter Menu.

    • Pushing it in any direction will bring up a radial menu with element types.

    • By pushing the stick towards one type and releasing it, you will only be able to focus on and select elements of that type.

    • The types are (from the top, counterclockwise): Vertex, Edge, Face, Object (not functional at time of testing), and All.
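As an illustration of the hover, select, and filter behaviour above, here is a minimal Unity C# sketch. It assumes each vertex, edge, and face primitive carries a small marker component; the names (ElementType, SelectableElement, ElementSelector) are hypothetical, and our actual prototype implemented this on top of VRTK's interaction events.

```csharp
// Hypothetical sketch of the hover/select behaviour. Each selectable
// primitive carries a SelectableElement marker; ElementSelector sits on the
// controller, whose collider is configured as a trigger.
using System.Collections.Generic;
using UnityEngine;

public enum ElementType { Vertex, Edge, Face, Object, All }

public class SelectableElement : MonoBehaviour
{
    public ElementType type;
    [HideInInspector] public Color baseColor;   // blue or green unselected state

    void Start() { baseColor = GetComponent<Renderer>().material.color; }
}

public class ElementSelector : MonoBehaviour
{
    public ElementType filter = ElementType.All;   // set by the radial menu

    private SelectableElement focused;
    private readonly List<SelectableElement> selection = new List<SelectableElement>();

    void OnTriggerEnter(Collider other)
    {
        var element = other.GetComponent<SelectableElement>();
        if (element == null) return;
        // Respect the element type filter: ignore non-matching elements.
        if (filter != ElementType.All && element.type != filter) return;

        focused = element;
        // Hover state: brighten the element so the user knows it is in focus.
        SetColor(element, element.baseColor * 1.5f);
    }

    void OnTriggerExit(Collider other)
    {
        var element = other.GetComponent<SelectableElement>();
        if (element == null || element != focused) return;
        if (!selection.Contains(element)) SetColor(element, element.baseColor);
        focused = null;
    }

    // Wired to the trigger press; multiSelect mirrors holding A or X,
    // like Ctrl-clicking files on a desktop.
    public void SelectFocused(bool multiSelect)
    {
        if (focused == null) return;
        if (!multiSelect) ClearSelection();        // single select replaces the set
        if (!selection.Contains(focused)) selection.Add(focused);
        SetColor(focused, Color.red);              // selected state
    }

    void ClearSelection()
    {
        foreach (var e in selection) SetColor(e, e.baseColor);
        selection.Clear();
    }

    static void SetColor(SelectableElement e, Color c)
    {
        e.GetComponent<Renderer>().material.color = c;
    }
}
```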

Controller Legend

  • Press the B or Y button to bring up a legend with instructions for these controls.

5.2 Environment

Selectable Objects. We used two selectable objects in our prototype: an outer blue cube, and an inner green cube. As Unity does not have native support for recognizing and differentiating between vertices, edges, and faces in its 3D objects, we created these cubes out of spheres, cylinders, and thin blocks.
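A rough sketch of how such a cube can be assembled from Unity primitives is shown below. The WireCubeBuilder name and the proportions are illustrative assumptions, not our exact prototype values.

```csharp
// Hypothetical sketch: assemble a selectable "cube" from primitives, since
// Unity meshes do not expose vertices, edges, and faces as separate objects.
using UnityEngine;

public class WireCubeBuilder : MonoBehaviour
{
    public float size = 1f;

    void Start()
    {
        // Vertices: a sphere at each of the 8 corners.
        for (int x = -1; x <= 1; x += 2)
        for (int y = -1; y <= 1; y += 2)
        for (int z = -1; z <= 1; z += 2)
            Place(PrimitiveType.Sphere, new Vector3(x, y, z) * size / 2f,
                  Vector3.one * size * 0.08f, Quaternion.identity);

        // Edges: a cylinder along each of the 12 edges. Unity cylinders stand
        // along +Y with height 2 * localScale.y, so rotate Y onto each axis.
        for (int axis = 0; axis < 3; axis++)
        for (int a = -1; a <= 1; a += 2)
        for (int b = -1; b <= 1; b += 2)
        {
            int u = (axis + 1) % 3, v = (axis + 2) % 3;
            Vector3 center = Vector3.zero;
            center[u] = a * size / 2f;
            center[v] = b * size / 2f;
            Vector3 dir = Vector3.zero;
            dir[axis] = 1f;
            Place(PrimitiveType.Cylinder, center,
                  new Vector3(size * 0.04f, size / 2f, size * 0.04f),
                  Quaternion.FromToRotation(Vector3.up, dir));
        }

        // Faces: a thin block centred on each of the 6 sides.
        for (int axis = 0; axis < 3; axis++)
        for (int s = -1; s <= 1; s += 2)
        {
            Vector3 pos = Vector3.zero;
            pos[axis] = s * size / 2f;
            Vector3 scale = Vector3.one * size * 0.98f;   // slightly inset
            scale[axis] = size * 0.02f;                   // thin along the normal
            Place(PrimitiveType.Cube, pos, scale, Quaternion.identity);
        }
    }

    void Place(PrimitiveType type, Vector3 pos, Vector3 scale, Quaternion rot)
    {
        var go = GameObject.CreatePrimitive(type);
        go.transform.parent = transform;
        go.transform.localPosition = pos;
        go.transform.localScale = scale;
        go.transform.localRotation = rot;
    }
}
```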

One of our design priorities was that users should always be able to see what is currently selected and what they are about to select. We made several design decisions based on this:

  • Selected objects are bright red, which stands out against the blue and green unselected states.

  • The object that will be selected if the user presses the trigger gains a hover state, represented by increased brightness.

  • If there are any objects in between the user's head and their controllers, these intervening objects will turn transparent, so the user can always see their controller locations (Fig. 2). (A sketch of this occlusion check follows the figure.)

Fig. 2. Left: The user has selected an edge (red) and is hovering over a face of the outer blue cube, as indicated by the face's increased brightness. Right: The outer cube's face turns transparent as the user reaches through it to select a face of the inner green cube. (Color figure online)
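As a minimal sketch of this behaviour, the check can be done by raycasting from the head to each controller every frame. The ControllerVisibility name is hypothetical, and the sketch assumes materials whose shaders support transparency.

```csharp
// Hypothetical sketch of the occlusion check: each frame, raycast from the
// head to each controller and fade out anything in between.
using System.Collections.Generic;
using UnityEngine;

public class ControllerVisibility : MonoBehaviour
{
    public Transform head;              // typically the main camera
    public Transform[] controllers;     // left and right hand transforms

    private readonly List<Renderer> faded = new List<Renderer>();

    void Update()
    {
        // Restore last frame's faded objects before re-testing.
        foreach (var r in faded) SetAlpha(r, 1f);
        faded.Clear();

        foreach (var controller in controllers)
        {
            Vector3 toController = controller.position - head.position;
            // Everything the head-to-controller ray passes through gets faded.
            foreach (var hit in Physics.RaycastAll(
                         head.position, toController.normalized, toController.magnitude))
            {
                var r = hit.collider.GetComponent<Renderer>();
                if (r == null) continue;
                SetAlpha(r, 0.3f);
                faded.Add(r);
            }
        }
    }

    static void SetAlpha(Renderer r, float alpha)
    {
        // Assumes a material whose shader supports transparency.
        Color c = r.material.color;
        c.a = alpha;
        r.material.color = c;
    }
}
```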

Element Type Filter Menu. We designed the element type filter menu so users would have greater control over what types of elements they wanted to select: vertices, edges, faces, objects (not implemented for this prototype), or all types. We reasoned this would be a powerful tool for quickly selecting small elements such as vertices without accidentally selecting larger surrounding elements (Fig. 3).

Fig. 3. Left: The user can hold the joystick to pull up an element filter menu that lets them select particular types or all types of elements. Right: The controller always shows a status symbol that indicates what elements it can currently select.
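The core of such a radial menu is mapping the joystick direction to one of the five options. A small sketch of that mapping follows; RadialFilterMenu is a hypothetical name, and it reuses the ElementType enum from the earlier selection sketch.

```csharp
// Hypothetical sketch of mapping the joystick direction to a filter type.
// The five options sit at 72-degree intervals, starting with Vertex at the
// top and proceeding counterclockwise, as in the prototype's menu.
using UnityEngine;

public static class RadialFilterMenu
{
    // Order matches the menu: from the top, counterclockwise.
    static readonly ElementType[] options =
    {
        ElementType.Vertex, ElementType.Edge, ElementType.Face,
        ElementType.Object, ElementType.All
    };

    // Returns null while the stick is in the dead zone, so the filter only
    // changes once the user pushes decisively toward a wedge and releases.
    public static ElementType? Pick(Vector2 stick)
    {
        if (stick.magnitude < 0.5f) return null;

        // Angle counterclockwise from straight up, wrapped to [0, 360).
        float angle = Mathf.Atan2(stick.y, stick.x) * Mathf.Rad2Deg - 90f;
        angle = (angle + 360f) % 360f;

        // Each option owns a 72-degree wedge centred on its direction.
        return options[Mathf.RoundToInt(angle / 72f) % 5];
    }
}
```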

Controller Legend. Finally, we added a controller legend so that users could see which controls on the likely unfamiliar Oculus Touch controller mapped to which functions. Lines run from the control descriptions to the buttons and triggers that activate them (Fig. 4).

Fig. 4. The controller legend serves as a quick reminder of functionality for the user.

5.3 Testing Procedure

Users were given a brief overview of our project and a quick explanation of the testing procedure. They were told what to expect and asked to think aloud whenever possible. We created a 2D model in SolidEdge that was similar to the 3D VR prototype. Users were given either the 2D or the 3D model first, and then the remaining model, in randomized order.

For each model, users were asked to perform simple tasks using the inputs available (mouse and keyboard for 2D; VR headset and controllers for 3D) and to describe their thinking and emotional state when possible. In the case of the 3D model, users were given 1–2 min of self-exploration to get oriented to the 3D environment before any task was given. Users were given a maximum of 5 min per task, and if they were struggling we offered a hint, which users could accept or reject.

After finishing all the tasks on a given model, we asked users to rate their experience with both models on a Likert scale along four dimensions:

  1.

    Intuitiveness of completing the tasks (1 = not intuitive, 5 = very intuitive)

  2.

    Ease of performing the controls necessary to complete the tasks (1 = not easy, 5 = very easy)

  3.

    Physical discomfort while completing the tasks (1 = no discomfort, 5 = high discomfort)

  4.

    Mental discomfort while completing the tasks (1 = no discomfort, 5 = high discomfort).

Further retrospective probing was conducted on any interactions they found surprising, useful, pleasant, etc.

We realized midway through the interviews that the scales were confusing: high scores were positive for the first two questions but negative for the last two. Each time, we explained specifically what a given score meant, so users were not confused about their answers, and in the interest of consistency we did not alter our results in any way.

All users volunteered for the testing and were neither compensated nor compelled to participate. Each user interview was video recorded, as was the screen on which the user performed the tasks on both models. Each user signed a consent form agreeing to be recorded and confirming they were neither compensated nor compelled to participate.

6 Findings

Table 1 shows the results of the questions we asked users following their completion of each model.

Table 1. Feedback from user testing on 2D and 3D models (n = 9)

Our findings show that, on average, users found the VR model much more intuitive than the 2D model. This may be biased, as the users we tested the models with generally had less CAD modeling experience than several of those we interviewed. However, part of our hypothesis was that those with less modeling experience in particular would prefer the intuitiveness of a 3D model. Users also generally found the VR model easier to control, though most users had no trouble using a keyboard and mouse. Most users explained that once they got the hang of the VR controls, which took about five minutes for most people, they were able to execute actions very naturally, whereas with a mouse and keyboard they still had to be very precise with their motions, particularly while trying to select individual vertices or edges.

Users did not report much physical discomfort on either model. One user did report some physical discomfort using the VR model, but he explained beforehand that he had used VR headsets before and always experienced some disorientation while using them. Users felt significantly less mental discomfort while using the VR model, and this was evident while watching the users attempt to complete the tasks on both models.

These results support our hypotheses that users would find completing tasks easier on the VR model, and that non-technical users would feel more comfortable using the VR model than a traditional CAD model.

7 Analysis and Recommendations

After reviewing our findings, we arrived at three elements that are crucial to a useful and enjoyable VR user experience.

7.1 Tutorial

Given the lack of standardization and general user knowledge of VR user interfaces, a tutorial that acquaints the user with the capacities and intended use of an interface is vital. We expected that it would be obvious that the user needed to touch the cubes directly to select them, but it wasn’t: some users looked for a laser pointer or some other way to select, while some thought that they could do it remotely using the element type filter menu. Once we verbally introduced users to the idea of moving themselves over to the cube and touching it, most of them took to it like fish to water; but before that initial instruction, they floundered.

Familiarization with the hardware (i.e. the controllers) is important as well. Some of our testers had never used any kind of game controller before, and later confessed to us that they “did not know all the buttons were functional.” If the user doesn’t realize a button exists, they won’t experiment with it.

The controller legend was useful to most users as a reference, but many found it overwhelming as a first introduction to the controls. We suggest that functions should be taught one at a time, giving the user the opportunity to practice in between each lesson, and that the user have the ability to refer back to previous lessons at will.

7.2 Environmental Context

In 2D CAD programs, it's sufficient for the user to manipulate objects on an otherwise featureless grey background; but in virtual reality, the relationship of the user to the environment and the objects around them is incredibly important. Placing users onto a featureless grey plain at best left them with no frame of reference for their interaction with the cube, and at worst disoriented them. Even something as simple as a plane to function as the ground would have helped with this.

The relationship between the user, the environment, and the object(s) in the scene creates further expectations for the user. In our prototype, the cube was the only thing in the environment other than the user; thus, many of our users expected that they would be moving, rotating, and scaling the cube, as in the 2D program. Several users were surprised when they found out they were instead moving themselves. Had we included other selectable objects, or even something like a pedestal for the cube to rest on, we could have corrected for this.

The size of the user as compared to the object(s) in the scene is also important. Different sizes afford different interactions: a user is going to behave differently with something the size of a Rubik's cube than with something the size of a shipping container. This is also an ergonomic consideration: if the user has to make large movements to climb all over an object in order to select its elements, they will quickly tire. Thus, the tradeoff between large objects, which allow precision, and small objects, which can be manipulated ergonomically, is one designers should consider carefully.

7.3 Action Feedback

Our users enjoyed getting feedback about their actions. At its most basic, that meant seeing the virtual controllers move when they moved them in physical space. Most of them expressed relief that they could see the small cube simply by reaching or leaning through the big cube in VR, compared to the complex maneuvers necessary to do so in 2D.

We also received mostly positive feedback about the easy-to-see selected state, and the visual reminder of what mode the Element Type Filter was in prevented a lot of confusion. The haptic and audio feedback from changing modes on the filter was also valuable. There was some frustration when users didn't receive enough visual feedback: the increase in brightness that indicated the hover state wasn't always visible due to ambient lighting conditions.

We believe that visual, audio, and haptic feedback should be available for almost every action the user takes or mode they select. When a user is displaced into virtual reality, they lose feedback they have lived with all their lives, such as seeing their hands; failing to introduce alternate forms of feedback in a design will inevitably result in confusion and disorientation.

8 Conclusion

To revisit our hypothesis, our usability testing showed that using virtual reality for 3D modeling addresses two important pain points of existing modeling software: a steep learning curve and difficulty of control. Our results support this claim, as our VR prototype outperformed SolidEdge on intuitiveness, ease of control, and mental comfort.

There are multiple veins of research that deserve to be explored with regards to 3D modeling in VR: more complex object selection; navigation and control refinement; environmental and scale effects; tutorial design; object selection in AR; and element selection using other types of VR devices such as the HTC Vive or phone-powered headsets like Google Daydream, just to name a few.

Applications of VR to element selection extend far beyond the field of engineering. The easy and intuitive interactions with complex objects that VR enables can be utilized in any field that involves 3D objects, especially those in which models must be communicated to people without technical training. Architecture clients will be able to quickly understand complex floor plans, reaching into a model's interior to select rooms or draw paths. Students will be able to learn subjects such as cell biology like never before, switching between cellular systems on the fly and viewing them at any angle or size. Doctors will be able to communicate clearly with their patients both in person and remotely, selecting and annotating elements of human models to show what is occurring in their patients' bodies.

While it’s still early to say exactly how VR interfaces will replace or supplement traditional desktop interfaces as a tool for 3D modeling and communication, our testers commented that VR seemed to be a great medium for viewing and navigating around 3D objects. As software designers become more skilled at creating VR interfaces and users gain experience, we will no doubt see VR taking on increasing importance in a wide variety of fields. We hope that the principles discussed here will contribute to a future that takes full advantage of what VR has to offer.