The entry of computers and digital tools into the design process has enabled an ever faster pace of developing products and services. It allows many engineers and other actors to work in parallel and to share, replicate, and combine their results across any number of recipients with little added effort. Additions and changes to a design can be made without any physical remake or rebuild of the objects. A development process can thus easily be shared among many actors and engineers to gather feedback and improvement suggestions. As the technology has been refined, more and more of the development and planning work can be conducted without any physical prototype, reducing the need for multiple time-consuming iterations of prototype building for verification and validation. This section serves as an introduction to VR, digital models, and 3D imaging in the upgrade design process.
2.1 Virtual Reality
Most commonly known as virtual reality (VR), the technology is sometimes also referred to as telepresence (Steuer 1992). The word presence alludes to the experience of being present in a virtual environment; in other words, the mind perceives a surrounding and setting other than the physical environment that actually surrounds the body. Steuer offers the following definition:
A “virtual reality” is defined as a real or simulated environment in which a perceiver experiences telepresence
VR Definition, Steuer (1992)
Steuer presents a framework of dimensions for appraising the quality of a given VR technology. These dimensions are vividness and interactivity. Vividness signifies the breadth of the VR medium, i.e. how many senses are exposed to stimuli, as well as the depth of those stimuli, meaning their level of detail. Interactivity denotes the user's ability to navigate or affect the VR environment, and how realistic that interaction is in terms of responsiveness and accuracy of movements (Steuer 1992).
In general, the term virtual reality refers to an immersive, interactive experience generated by a computer.
VR Definition, Pimentel and Texeira (1993)
Many authors have tried to characterize and measure VR technologies in terms of the quality of the experience. It is, however, an elusive quality and hard to measure in a quantifiable way. Gibson, for example, who predates Steuer (1992), also speaks of presence as the measure (Gibson 1979). In present terminology the word immersion is often used to describe the quality of a VR system. Immersion denotes the quality of the sensory stimuli that the system can produce. It is related, although not directly, to the user's subjective feeling of "presence": logically, the greater the quality of the stimuli, the higher the probability of achieving a high level of presence. Yet, as many researchers in the field note, presence is highly dependent on the individual, and some individuals have a greater capacity to experience it. Presence can be interpreted as a measure of the extent to which the user forgets the medium in favour of the experience of "being" in the virtual environment (Loomis 1992).
Another example is Loeffler and Anderson (1994), who define VR as "a 3D virtual environment that is rendered in real time and controlled by the users". Similarly to Steuer's (1992) framework, this includes the concepts of vividness (rendering) and interactivity (control), although it appears narrower in that it alludes only to visual stimuli, i.e. rendering.
There have been attempts at quantifying both immersion and presence. Pausch attempted to quantify the level of immersion in VR (Pausch et al. 1997). Meehan et al. (2002) wrote about physiological measurements of the VR experience, invoking stress in the subjects to capture the fleeting aspect of presence. The measurements covered heart rate, skin conductance, and skin temperature to determine the reaction of the test subject and compare it to the change in the same measures in a corresponding real situation. The logic is that if our reactions to a situation in the virtual environment mimic our reactions to the same situation in the real world, our minds and bodies are likely believing the experience. The topic is debated from a different standpoint by Bowman, who poses the question of how much immersion is enough (Bowman and McMahan 2007). This is indeed an interesting aspect when the purpose is to facilitate work tasks in industry: there, immersion lacks value in and of itself, as opposed to VR for entertainment purposes, where elevated immersion is fiercely sought. Teyseyre and Campo (2009) represent one attempt at identifying the strengths and weaknesses of 3D visualisation in general. Their findings are shown in Table 1.
A general motivation for starting to use VR is the limitation of what information can be presented by traditional 2D models (Smith and Heim 1999). The same authors argue that VR makes it possible to make accurate and rapid decisions through the added understanding an immersive virtual environment gives (Smith and Heim 1999). Another strong driver for using VR technology, compared to traditional visualization of 3D models, is the increased spatial understanding achieved in a VR environment. This helps experts in domains outside of 3D modelling and CAD reach the same, or close to the same, understanding of the models as the model developer.
2.2 Virtual Reality in the Adaptation Process
Systems are designed to fulfil some function or need for their users. Inevitably, those needs or functions will be altered over time, and to keep fulfilling them the system has to adapt accordingly. This adaptation can be achieved either by improving the system's current functions or by adding new functionality to the system. When designing and implementing adaptations to existing systems it is desirable to plan for and foresee any problems that might arise. This is done to ensure good quality and to reduce the implementation time, minimizing the downtime of the system during the adaptation process (Groover 2007).
Accessing models through VR provides better understanding, and makes the models accessible from various places. Many companies operate on a global scale and need to align and synchronize their efforts in an efficient way. This paper is concerned with upgrades and changes to long-life assets, and specifically with how to plan and optimize these upgrades in a collaborative way, making use of the many skills and areas of expertise that exist in a company. In a sense, all the conceivable actors that interact with the IPSS should contribute their aspects and needs. This supports a holistic approach to the upgrade and reduces the risk of costly oversights of critical functions or aspects.
The idea of utilizing VR to support engineering work in general has been around for a long time. Deitz wrote in 1995 about the state of VR as a mechanical engineering tool, concluding that it has the potential to "reduce the number of prototypes and engineering change orders", "simplify design reviews", and "make it easier for non-engineers to contribute to the design process" (Deitz 1995). High-investment assets by their nature tend to have many users and actors, many of them non-engineers, who interact with them over time. Often these non-engineers hold valuable tacit knowledge about the operational phase and maintenance of the asset. Enabling these individuals to be part of the upgrade process can potentially bring about a more optimal end result that considers more aspects than a pure engineering solution would have.
This section goes into detail about VR, how it can be indexed and described, and gives examples of the various technological solutions that exist today. It further introduces the field of 3D imaging as a technology for providing accurate digital 3D surface representations of already existing assets, and discusses how these can be used in the ideation and design phase of an upgrade.
2.3 VR Technologies Related to Adaptation of Manufacturing Processes
For the purposes of the research presented in this project, the focus has been on 3D environments for planning and evaluation of upcoming changes and updates of high-investment assets. For this purpose, only a limited range of the field of VR has been considered and investigated. The aspects included are visual stimuli, movement/locomotion in the environment, and to some extent the ability to interact with modelled objects inside the virtual environment. For the extent of the implementation, VR is defined as a 3D environment, rendered in real time, which the user has some ability to navigate around in and interact with. Apart from the addition in italics, this is much like the VR definition given by Loeffler and Anderson in 1994 (Loeffler and Anderson 1994).
When applying this scope to the field of VR there are a number of technologies to choose from. A number of them will be presented here. The selection is based on the purpose of using VR which is to give users a feeling of being inside the virtual environment, using some sort of display to visualise the 3D virtual environment (Korves and Loftus 1999).
Menck et al. lists general technologies used to create VR interfaces (Menck et al. 2012): computer display, head-mounted display (HMD), power wall, and cave automatic virtual environment (CAVE).
The above technologies differ on a number of factors: they present different inherent capabilities, and their costs also vary significantly, which can steer or limit the choice depending on the application. From a capability perspective, many aspects can be identified, for example multi-user functionality, stereoscopic display, real-world blending versus a strictly virtual environment, passive or (inter-)active operation, and representation of the user's (or users') body, to name a few. These capabilities affect the level of immersion, or presence, that the users experience, as well as their ability to conduct meaningful tasks in the virtual environment.
Computer displays are the most basic and least costly technology for interfacing with the VE; movement is controlled using e.g. a 3D manipulator or even a regular computer mouse (Menck et al. 2012). Many users can be present at the same screen, but all of them share the same viewpoint and are in that sense passengers to the main user, who controls the navigation.
Head-mounted displays (HMDs) have been available for a long time, but only recently have they developed to a level that can be said to trick the human senses well enough for an immersive experience. The HMD is worn over the head of the user and shuts out any external visual stimuli (Duarte Filho et al. 2010). The user is therefore not inherently able to experience his or her body. There are ways of recording and rendering the user's body and posture back into the virtual environment in real time, examples being VR gloves or 3D imaging sensors that map the user's movements (Korves and Loftus 2000; Mohler et al. 2010). If such a mapping is performed, this solution can support multi-user environments by rendering the mapped bodies and postures, or avatar representations of them, back into the virtual environment (Beck et al. 2013; Mohler et al. 2010). Recent technological development has significantly decreased the cost of HMDs compared to when the cited work was written. In Chapter "Sustainable Furniture That Grows with End-Users" of this publication, Berglund et al. state that the industrial partner views HMDs as a scalable solution based on the price point.
Power wall is an umbrella term for large-scale back-projected displays. Traditionally they are limited to one point of view in the same way as a computer screen, although there are recent examples where this limitation is overcome through a combination of DLP projectors and shutter glasses (Kulik et al. 2011). The size of power walls makes them suitable for team collaboration, allowing for both active participants and passive spectators in a larger forum (Waurzyniak 2002).
CAVEs are room environments encapsulated by screens on all (or at least three) sides. The user stands between the walls and the virtual environment is projected around him or her. Tracking equipment is used to manipulate the environment so that it constantly matches the user's viewpoint (Duarte Filho et al. 2010).
With the many available solutions, choosing the appropriate one can be a challenging task. Mohler et al. (2010) stress the importance of body representation in VR environments and show that it significantly improves the users' ability to accurately judge scale and distance. Kulik et al. (2011) focus on the importance of multi-user support in VR, even stating that it is not VR if it is not multi-user. Figure 1 depicts an abstraction of the main components of a VR system, incorporating 3D imaging data.
2.4 3D Imaging Introduction
Capturing spatial data can be done in a number of ways, utilizing a wide variety of technologies. These technologies are often categorised into tactile and non-tactile (Varady et al. 1997). Tactile technologies require physical contact with the measurand, while non-tactile technologies rely on some non-matter medium for their interaction with the measurand. While tactile technologies are often characterized by high precision, they also risk influencing the measured object during the measurement process. The inherent requirement of movement tends to result in comparably low data-capture speeds and a limit on the maximum measurement area. These drawbacks can create difficulties if the measurand has a soft or yielding surface, or exceeds a certain size (Varady et al. 1997). An industrially proven and frequently used type of tactile sensor is the coordinate measurement machine (CMM). CMMs rely on linear movement axes providing three degrees of freedom, coupled with a probe unit providing another three. CMMs are programmable and can be used as an integrated resource in a production facility to conduct in-line automated measurement of products.
Non-tactile technologies exist in a number of forms; a common classification is into active and passive non-contact sensors. Passive sensors make use of the existing background signals of the environment, such as light or noise. Active sensors emit a signal into the environment and use the returned signal to map the surroundings. 3D imaging describes the field of capturing spatial data from the real world and making it available in digital form. It exists on a wide range of scales and serves different purposes. The digital spatial data can be stored for future reference, or processed in order to perform analysis for some specific purpose. The ASTM Subcommittee E57.01 on Terminology for 3D Imaging Systems defines a 3D imaging system as (ASTM 2011):
A non-contact measurement instrument used to produce a 3D representation (e.g., point cloud) of an object or a site.
The term point cloud in the definition deserves a closer explanation. It is descriptive of the contents of the data set resulting from a 3D imaging procedure: the data is recorded as coordinates in space, i.e. points. The word cloud can be traced to the fact that these coordinate points are unstructured (although it can be argued that their sampling pattern is a direct function of the operational parameters of the 3D imaging technology). The cloud can also be said to relate to the lack of any semantic information: a point cloud generated from a measurement holds no explicit concept of objects or of relationships between points. These may of course be generated or extracted using various techniques in a post-processing or analysis operation.
There exists a multitude of measurement instruments for 3D imaging. Several surveys of the field exist to classify and describe the available technologies (Besl 1988; Beraldin et al. 2007). Figure 2 presents one such classification.
Since the publication of the work on which Fig. 2 is based, the circles have widened considerably. An example is photogrammetry, which is now capable of capturing the surface geometry of very complex and feature-rich objects.
3D imaging is a technology used in many different fields; some examples are given in Fig. 3a–d. The choice of technology relates both to the scale of the objects and to the data requirements connected to the intended use of the data.
Figure 3a. Product scan: 3D imaging is used in product development to digitalize, for example, clay models of product designs. It is also used in production to validate process output, e.g. shape conformance of the physical product to the designed tolerances (Yao 2005; Druve 2016).
Figure 3b. 3D scanning of a building: Building Information Model (BIM) is an area within facilities management that has adopted 3D imaging, partly to map the existing facility more accurately, and partly to improve visualization quality and real-world likeness.
Figure 3c. 3D imaging of cultural heritage: 3D imaging has made a significant impact on cultural heritage preservation and archaeology in the last decade. By digitalizing artefacts in a museum, entire structures, or archaeological excavation sites, these can be shared among researchers or the public at a global scale. Archaeology students from anywhere in the world can access a digital version of the Cheops pyramid or the Incan temples of Machu Picchu (Pieraccini et al. 2001; Sansoni et al. 2009).
Figure 3d. Pipe fitting to 3D imaging data: Reverse engineering of, for example, pipes is used frequently in the process industry. Typically it provides current-state in-data for installing new pipes and retrofitting old ones (Olofsson et al. 2013).
2.4.1 3D Laser Scanning in the Adaptation Process
3D laser scanning, or laser detection and ranging (LADAR), is a non-contact measurement technology for the capture of spatial data. The technology was developed within the field of surveying as a tool to map terrain as well as to control and monitor the status of construction jobs. Today it is used in a variety of fields, such as building and construction, tunnel and road surveying, robot cell verification, layout planning, and forensics (Slob and Hack 2004; Sansoni et al. 2009).
When capturing spatial data with a 3D scanner, the scanner is placed within the environment of interest; this could be an existing production system or a brownfield factory floor. A laser pulse or beam is emitted into the environment and its reflection is logged as a time of flight or a phase shift. Today's scanners are able to map their entire field of view up to eighty meters away in a matter of minutes, with a positional accuracy of a few millimetres (FARO 2012). The resulting data is often referred to as a point cloud: a set of coordinates in 3D space, typically numbering in the tens of millions. The latest 3D scanners are equipped with RGB sensors that add colour information to the coordinates to further improve visualization.
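The time-of-flight principle mentioned above reduces to a simple relation: the measured distance is half the round-trip path of the light pulse. A minimal sketch follows; the constant and function names are our own illustration, not part of any scanner's API, and phase-shift scanners derive range differently (from the phase difference of a modulated beam):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s):
    """Distance to the target given the round-trip time of a laser pulse.

    The pulse travels out to the target and back again,
    hence the factor 1/2.
    """
    return C * round_trip_time_s / 2.0

# A target 80 m away reflects the pulse back after roughly half a
# microsecond, which is why millimetre accuracy demands picosecond-level
# timing resolution in the scanner electronics.
t = 2 * 80.0 / C          # ~5.3e-7 seconds
distance = tof_distance(t)  # ~80.0 m
```

The tight timing requirement implied by this relation is one reason phase-shift scanners, which trade maximum range for easier signal processing, are common at the ranges quoted above.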
As this technology matures and the tools and methods to capture data become more readily available, there is also a steadily growing range of software tools to support its usage (Bi and Wang 2010). These tools are either specialized for visualizing and editing point cloud data sets, or they are extensions of traditional CAD and simulation tools able to integrate point cloud data. The integration into existing tools enables hybrid modelling environments where CAD and point cloud data are used in parallel. Using hybrid models, CAD models of new machine equipment or products in the design stage are placed into scanned existing production facilities for planning verification.
Some challenges with this new technology are the size of the data and interoperability issues between vendor-specific data formats. However, several research efforts strive to automate the translation of point cloud data into CAD surfaces to reduce data size (Bosche and Haas 2008; Huang et al. 2009), and new software optimized for visualization of this data format is being developed (Rusu and Cousins 2011). Ongoing standards activities are developing neutral processing algorithms and data formats to ensure repeatability, traceability, and interoperability when working with point cloud data (ASTM 2011).
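To illustrate the data-size problem in concrete terms, one widely used reduction technique is voxel-grid downsampling, which collapses every point inside a cubic cell to a single centroid. The sketch below is a generic, minimal Python illustration of that idea, not the specific surface-reconstruction approach of the works cited above:

```python
from collections import defaultdict
from math import floor

def voxel_downsample(points, voxel_size):
    """Collapse all points falling in the same cubic voxel to their centroid.

    A simple way to shrink clouds that run to tens of millions of points
    while preserving overall shape. `points` is a list of (x, y, z)
    coordinates; `voxel_size` is the cell edge length in the same unit.
    """
    voxels = defaultdict(list)
    for p in points:
        # Integer cell index of the voxel containing this point.
        key = tuple(floor(c / voxel_size) for c in p)
        voxels[key].append(p)
    # One representative point per occupied voxel: the centroid.
    return [
        tuple(sum(c) / len(group) for c in zip(*group))
        for group in voxels.values()
    ]
```

For example, two scan points a few millimetres apart fall into the same 0.1 m voxel and merge into one output point, while distant points survive untouched; the reduction factor thus depends directly on the chosen voxel size.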
Figure 4 gives an insight into the nature of 3D laser-scanning data by zooming in on the model until the individual measurement points are distinguishable. The measurement points are singular positions plotted in 3D space; the software visualising them therefore assigns each an arbitrary pixel size.