1 Introduction and Motivation

Increasing demands for more innovative products and rising competition lead manufacturing companies to design more flexible and efficient production environments to sustain their competitiveness. As a result of this transformation process, the increasing automation of production work has reduced the amount of manual work, depending on the application domain, while the remaining manual work has become increasingly knowledge-intensive (Campatelli et al. 2016). As knowledge is widely accepted as an important organizational asset, knowledge management systems – as a class of information systems – have greatly promoted the creation, transfer, and application of knowledge in organizational environments (Alavi and Leidner 2001). Though facilitating individual and organizational knowledge work by implementing technologies is per se not a new phenomenon (Stocker et al. 2012), recent developments in digital technologies, including mobile technologies, big data analytics, and augmented, mixed and virtual reality, offer even more promising opportunities to facilitate knowledge-intensive tasks on the shop floor (Hannola et al. 2018).

Terms such as ‘Industry 4.0’ or ‘Digitalization’ have become very popular in recent years and have been very successful in drawing the attention of senior decision makers. However, implementing related technologies to support knowledge work is not a management fashion, but can sustainably empower people and daily operations (Leyer et al. 2019). It is a grand challenge of any successful implementation project to improve current and future work practices of employees (Richter et al. 2018), which involves capturing and fully understanding the as-is situation, co-designing a to-be situation with the relevant stakeholders, and then kicking off an iterative solution design supported by several digital prototypes of rising maturity. All this requires an integrated, interdisciplinary, participative, and agile approach that allows identifying, analyzing, and supporting human work practices in a predominantly digital environment. It is hence crucial that digital work designers understand how and why things work before they can provide a digital solution to support work practices (Richter et al. 2018).

Augmented reality (AR) and mixed reality (MR) are two very promising digital technologies capable of supporting workers in production environments and easing their work practices. Both technologies have a long history: first concepts of heads-up, see-through, head-mounted displays in manufacturing date back to the early 1990s (Caudell and Mizell 1992). However, recent advancements in wearable technology devices, including Microsoft HoloLens or Google Glass (Rauschnabel and Ro 2016), have again caught the attention of researchers, who now evaluate the adoption of AR and MR in industrial worker-centric use cases. These devices may offer not just increased usability, but also the increased usefulness needed for adoption in industrial use cases.

Within this paper the following research question will be answered: How can current work practices be transformed through implementing AR and MR technologies, thereby enabling future digital workplaces? To answer this research question, the authors first performed a literature review on AR and MR in general, and on AR and MR in production environments in particular. Second, drawing from their own experiences, the authors describe use cases of AR/MR technologies in two different production-related scenarios, discuss their results, and summarize their lessons learned.

Next, the authors review the scientific state of the art on AR and MR in production environments.

2 A Brief Review of the Literature

2.1 Augmented, Mixed and Virtual Reality

Although Augmented, Mixed and Virtual Reality are often lumped together, they are three distinct approaches. For a better understanding of these concepts, the differentiation between the real and the virtual environment as two poles at opposite ends of a Reality-Virtuality Continuum plays a major role, as outlined by Milgram and Colquhoun (1999) and shown in the figure below. According to Milgram and Kishino (1994), MR is a subset of virtual reality (VR) related technologies that involve the merging of real and virtual worlds somewhere along the “virtuality continuum” which connects completely real environments to completely virtual ones (Fig. 1).

Fig. 1. Reality-Virtuality Continuum (Milgram and Colquhoun 1999)

An early survey and one of the most cited papers on AR and its specific characteristics is provided by Azuma (1997), who defines AR as a variation of VR: while the user is fully immersed in VR and cannot see the real world around him, AR allows the user to see the real world with virtual objects superimposed upon or composited with it. Medical applications, manufacturing and repair, annotation and visualization, robot path planning, entertainment, and military aircraft are presented as applications in Azuma’s paper. Azuma et al. (2001) published an updated survey considering the rapid technical advancements in the field of AR. In their review update they define an AR system as one that combines real and virtual objects in a real environment, runs interactively and in real time, and registers (aligns) real and virtual objects with each other.

Another comprehensive survey of AR technologies and applications is provided by van Krevelen and Poelman (2010). According to them, personal information systems, industrial and military applications, medical applications, entertainment, collaboration, and education and training are prominent application domains for AR. A further survey on research and development in the field of AR is provided by Billinghurst et al. (2015). Education, architecture, and marketing are listed by them as prominent examples of typical modern-day applications. A survey of mobile and wireless technologies for AR systems used for augmented ubiquitous computing is provided by Papagiannakis et al. (2008), covering application areas for mobile AR such as virtual characters, cultural heritage, navigation and pathfinding, edutainment and games, collaborative assembly and construction, and maintenance and inspection.

VR, AR, and MR offer various potentials for innovative applications beyond digitizing manufacturing workplaces. However, manufacturing and related processes, including e.g. machine maintenance or factory learning, seem to be prominent industrial application areas for these technologies. Nevertheless, there is still a lot of misunderstanding about the differences between these concepts, which must be clarified. A widely used differentiation from a practitioner’s viewpoint was provided by the software developer Julia Tokareva and is shown in the figure below. According to her, VR immerses the user in a fully digital environment, while AR just overlays the real world with digital objects. MR additionally anchors digital objects to the real world (Fig. 2).

Fig. 2. VR, AR, and MR (Tokareva)

2.2 A Review of AR/MR in Industrial Use Cases

According to the scientific state of the art on implementing augmented and mixed reality technologies in production(-related) environments, both hold huge potential to support knowledge-based work. A series of researchers have published scientific papers on how to apply AR/MR within industrial manufacturing and maintenance use cases.

Caudell and Mizell (1992) describe the design of a heads-up, see-through, head-mounted display for human-involved aircraft manufacturing, as modern aircraft require a huge amount of manual effort due to small lot sizes of parts and the human skills required in many assembly tasks. AR can for instance be used to dynamically mark positions of drill holes inside an aircraft fuselage or to project graphical templates for the location and orientation of composite cloth during the layout process. Neumann and Majoros (1998) describe an implementation for a maintenance scenario of a transport aircraft using AR technology, showing features that make AR attractive for manufacturing and maintenance. Friedrich (2002) outlines the potential for process and quality improvements in development scenarios (e.g. comparison of test and calculation results of a crash test, ergonomic layout design and flow visualization of pilot and passenger seats, and design and layout of cars), production/assembly scenarios (e.g. assembly of a fresh-water system in an aircraft, manual assembly in small batch production) and service scenarios (e.g. troubleshooting and service on production systems). Doil et al. (2003) describe how AR can be used to improve industrial planning processes, whereby an existing production environment can be augmented with virtual planning objects.

In their comprehensive paper, Nee et al. (2012) review AR applications in design and manufacturing, including AR collaborative design, robot path planning, plant layout, maintenance, CNC simulation, and assembly using AR tools and techniques. AR can provide manufacturing workers with hands-free access to context-sensitive digital checklists to support assembly and quality control tasks in automotive manufacturing, thereby reducing process time and paper consumption (Stocker et al. 2017). Evans et al. (2017) present a prototype of a system for the Microsoft HoloLens to deliver spatially located AR assembly instructions.

Guhl et al. (2017) propose a concept for human-robot interaction using VR and AR on mobile devices, including mobile phones and tablets, as well as MR devices such as the HoloLens, supporting human operators in interacting with and programming robots. Blaga and Levante (2018) present a developed scenario on human-robot collaboration using the HoloLens. Karlsson et al. (2017) introduce a decision support system using simulation and AR (HoloLens) to show a simulation model in 3D for an improved display of manufacturing information. Furthermore, AR and MR can support distance learning within a production environment, whenever a learner receives learning content as a video stream or as textual content from an instructor directly at the machine (Spitzer et al. 2018).

Following this review of the academic state of the art, the authors provide two example cases in which augmented and mixed reality technologies are used to support production-related processes.

3 An Implementation of Augmented and Mixed Reality in Two Production-Related Scenarios

After the literature review, the paper outlines two different scenarios of how digitally augmenting human work in production-related scenarios can improve human knowledge processes.

The first use case is in the additive manufacturing domain and demonstrates how augmented and mixed reality can be used to support a worker in maintaining a production machine (a 3D printer). Since a maintenance task often involves disassembly and assembly of a machine, this use case partly overlaps with the second one. The second use case is in the automotive domain and shows how to integrate mixed reality into a real-world construction and production workflow in the special machine design domain. As the second use case is still in its conceptualization phase, no demonstrator can be presented. The Microsoft HoloLens, as shown in Fig. 3, is used for both use cases as a representative of state-of-the-art smart glasses.

Fig. 3. Microsoft HoloLens

The Microsoft HoloLens is fully untethered holographic eyewear. Virtual content is displayed on see-through holographic lenses directly in the field of view.

3.1 Maintaining a Production Machine with AR

Maintaining a production machine is a big challenge, especially for inexperienced workers. Usually, a machine manual, instruction photos, or videos are available, but it is still very difficult to gather all the information necessary to repair the machine. Photos and videos have the big disadvantage that the viewing angle is fixed by the creator of the material, while it is sometimes necessary to see the maintenance instruction from another angle. Additionally, the creator of the instruction material is often a very experienced employee, so it is very difficult for them to put themselves in the shoes of an inexperienced employee and provide suitable learning material. Written instructions occasionally cause ambiguity when shared across different countries, cultures, and languages, especially when the maintenance process is very challenging, for example cleaning the lens of an industrial laser cutting machine (Spitzer et al. 2017). Therefore, we implemented 3D maintenance animations augmented over the real machine. The advantage of this approach is that the viewing angle is not fixed and can be adjusted by walking around the machine while wearing the Microsoft HoloLens.

We have a 3D printer lab in our research center which is used to print prototype car parts. Several departments of our research center are using the 3D printer. Figure 4 shows the Virtual Vehicle 3D printer lab.

Fig. 4. 3D printer lab at Virtual Vehicle Research Center

The 3D printer is under heavy usage at our research center, so regular maintenance procedures are necessary: the glass plate has to be readjusted and cleaned, printer material has to be changed, the printer nozzle has to be cleaned, and many other tasks have to be performed. We have some experts in the department, but when they are not available, inexperienced users have to perform the maintenance procedures. This situation is comparable to a real-world industry scenario: the expert may be ill, on vacation, or on a business trip, in which case an inexperienced colleague must step in to perform the maintenance procedures. Figure 5 shows a capture of the field of view of a person wearing the HoloLens while cleaning the glass plate. The HoloLens UI can be placed freely in the room. The UI shows short text descriptions of the maintenance steps, and the animation is augmented directly onto the 3D printer.

Fig. 5. 3D printer without HoloLens (left), 3D printer through HoloLens (right)
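The combination of short step descriptions in the UI with an animation per step can be modeled as a simple ordered procedure. The following minimal Python sketch illustrates one possible data model; the step texts and animation identifiers are illustrative assumptions, not the actual implementation used on the HoloLens.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceStep:
    """One step of a guided maintenance procedure."""
    text: str       # short description shown in the UI
    animation: str  # id of the 3D animation anchored onto the machine (hypothetical)

class MaintenanceProcedure:
    """Steps through a procedure; the UI would render the current step."""
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current(self):
        return self.steps[self.index]

    def next_step(self):
        # Advance, but stay on the last step once the procedure is finished
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

# Illustrative example: cleaning the glass plate of the 3D printer
glass_plate_cleaning = MaintenanceProcedure([
    MaintenanceStep("Remove the glass plate", "anim_remove_plate"),
    MaintenanceStep("Clean the plate", "anim_clean_plate"),
    MaintenanceStep("Re-insert and level the plate", "anim_level_plate"),
])
```

The HoloLens client would render `current.text` in the UI panel and play the animation referenced by `current.animation` on the registered machine model.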

3.2 Automated Assembly Manual with AR

The second use case is in the special engineering domain. The challenge in this domain is that such machines/products are produced in very small lot sizes. Additionally, some machines differ only very slightly; hence, it is a challenge for the worker to assemble the right parts for a certain product. It is also a big challenge to communicate slight changes in the engineering CAD data to the production worker. We will implement a software artifact to highlight such changes and visualize them for the worker, in order to minimize the error rate during assembly. Therefore, we are developing an automated assembly manual with AR which automatically adapts to changes of parts and assembly groups.

Creating a new machine, or a product in general, involves a lot of people from engineering and production. The next section describes the different roles in this process. In some cases, one person may occupy multiple roles, depending on the company structure or on the complexity of the product. Because of the non-disclosure of product development data (CAD models, engineering documents) of our industry partners, we decided to abstract the use case to a Lego® Technic assembly, which can then be mapped to the real industry use case with adequate effort. This approach has already been used in other projects (Spitzer and Ebner 2017).

Involved Roles in Special Engineering.

This section describes the roles involved in technical aspects of product development. Other roles like business, management and logistics roles are not considered because we are focusing on optimizing the CAD/engineering to production workflow.

CAD Engineers.

The CAD engineers construct the machine in CAD. They are aware of the physical boundary conditions which must be considered. Additionally, they constantly check whether the product can be assembled. They select appropriate parts, ideally standardized ones, and appropriate tools. The main goal is to reuse standardized parts and tools already available in enterprise resource planning (ERP) systems. Furthermore, they select and reserve space in the assembly for electronics. Technical documentation such as drawings, bills of material, calculations, and reports is also part of their work. The main challenges for CAD engineers are to use appropriate parts and to design a product which can be assembled.

Assembly Manual Editors.

The assembly manual editors translate the technical documentation of the CAD engineers into assembly instructions targeted at shop floor workers. They define the assembly sequence by mapping the engineering bill of materials (EBOM) to the manufacturing bill of materials (MBOM). Additionally, they select the appropriate manual type, for example printed CAD drawings with annotations, image- or video-based instructions, or 3D animations. The main challenge is to create an appropriate manual for a product which will be assembled for the first time. This is usually a very time-consuming process, e.g. the creation of 3D animated manuals. Very often, the software used does not fit all kinds of manual types.
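The EBOM-to-MBOM mapping described above can be checked mechanically: every part the CAD engineers specify must be consumed, in the right quantity, by some step of the assembly sequence. The following Python sketch illustrates such a consistency check under simplified assumptions (flat bills of material, parts identified by string ids); the part names and instructions are purely illustrative.

```python
# EBOM: part id -> quantity, as designed by the CAD engineers (illustrative data)
ebom = {"frame-01": 1, "axle-02": 2, "gear-03": 4}

# MBOM: ordered assembly sequence (part id, quantity, instruction text)
mbom = [
    ("frame-01", 1, "Mount the frame on the fixture"),
    ("axle-02", 2, "Insert both axles"),
    ("gear-03", 4, "Attach the gears to the axles"),
]

def check_mapping(ebom, mbom):
    """Return (missing, extra): parts whose consumed quantity deviates from
    the EBOM, and parts used in the MBOM but absent from the EBOM."""
    consumed = {}
    for part, qty, _instruction in mbom:
        consumed[part] = consumed.get(part, 0) + qty
    missing = {p: q for p, q in ebom.items() if consumed.get(p, 0) != q}
    extra = {p for p in consumed if p not in ebom}
    return missing, extra
```

A check like this could run automatically whenever either bill of materials changes, flagging inconsistencies before the manual reaches the shop floor.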

Assembly Planning Engineer.

The assembly planning engineers ensure that the product can be assembled by the shop floor workers. They are responsible for error prevention, plausibility checks, collision detection, and testing, and they validate the CAD engineers’ work. Additionally, they estimate the costs and time effort of the assembly process. The decision of whether the product will be assembled manually, partly automated, or fully automated is also their responsibility. They assemble the prototype first to identify assembly issues and trigger changes to the assembly manual if necessary. After assembly, they check whether all flexible parts such as cables or tubes fit appropriately. The last step is to identify the skill level shop floor workers need to assemble the product. The main challenge for the assembly planning engineers is to identify all issues with the assembly manual and the product assembly and to give feedback to the assembly manual editors and CAD engineers.

Shop Floor Worker.

The shop floor workers prepare the workplace, tools and parts. Their main task is to assemble the product. The main challenge is to identify revision changes and manual updates when the special engineering product is changed slightly.

As-Is Situation.

The CAD engineer designs the product in CAD; the assembly manual editor creates the assembly manual using several tools such as Microsoft Word, Visio, PowerPoint, several image editing applications, and CAD tools. The assembly planning engineer validates the assembly manual and the engineering data and gives feedback to the CAD engineer and the assembly manual editor. If changes are necessary, the CAD data and the assembly manual have to be adapted. The creation and adaptation of the assembly manual is very time-consuming. The assembly planning engineer then communicates the work order to the shop floor workers. Figure 6 shows the process in detail.

Fig. 6. As-is situation

To-Be Situation.

The assembly manual should adapt automatically to changes in the engineering data. This will save a lot of time, especially in the special engineering domain. The assembly planning engineer only has to communicate with the CAD engineers, and the update of the assembly manual is triggered automatically when the CAD engineers change engineering data. Figure 7 shows the process in detail.

Fig. 7. To-be situation

Main Challenges.

The main challenge is to communicate small changes in engineering data to the shop floor worker. A lot of manual tasks must be performed to ensure data consistency. Such tasks should be automated as far as possible.
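One building block for automating this is a diff between two revisions of the engineering data: any parts that were added, removed, or changed in quantity are exactly what must be highlighted for the shop floor worker. The following Python sketch shows the idea on flat bills of material; the revision data and part ids are illustrative assumptions, not data from the industry partners.

```python
def diff_revisions(old, new):
    """Compare two BOM revisions (part id -> quantity) and report what
    should be highlighted in the AR assembly manual."""
    added = {p: q for p, q in new.items() if p not in old}
    removed = {p: q for p, q in old.items() if p not in new}
    changed = {p: (old[p], new[p])
               for p in old.keys() & new.keys() if old[p] != new[p]}
    return {"added": added, "removed": removed, "changed": changed}

# Illustrative revisions of a machine variant
rev_a = {"frame-01": 1, "axle-02": 2, "gear-03": 4}
rev_b = {"frame-01": 1, "axle-02": 2, "gear-03": 6, "clip-04": 8}
```

Hooking such a diff into the release of new engineering data would let the AR manual regenerate itself and visually mark only the affected assembly steps, instead of relying on manual consistency work.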

4 Conclusion, Discussion and Outlook

Following a review of the state of the art on AR in industrial use cases, the authors have provided insights into two different use cases of how human work can be digitally augmented to facilitate knowledge-intensive production tasks: maintaining a production machine with AR and an automated assembly manual with AR. Thereby, the involved roles, the as-is situation (and its challenges), as well as the envisaged to-be situation have been outlined, which is a useful practice when implementing smart factory technologies (Rosenberger and Stocker 2017).

Adopting AR/MR in production environments requires a change from the current as-is situation to the envisaged to-be situation. In such a challenging change process, current work practices must be turned into digitally empowered work practices, while the developed digital artefacts must support and transform current workplaces into digital workplaces in the best possible way (Richter et al. 2018). Such projects have a social perspective, as information systems are treated as socio-technical systems, and these initiatives must deliver the expected benefits (Luna-Reyes et al. 2005). Users must adapt to new work settings and adopt new working practices. Active change management is a crucial factor for project success.

Implementing augmented and mixed reality in industrial use cases can have both theoretical and practical implications. For instance, digitally augmenting human work can enable context-aware access to process-relevant information and knowledge on the shop floor and allows cutting service and/or production times, while at the same time increasing product and process quality, as empowered employees can make better-informed decisions. Workers making informed decisions may reduce their level of frustration, increase their job satisfaction (Schafler et al. 2018), and retain a productive flow of work. Context-relevant information displayed in the line of sight without media breaks, and seamless interaction across different IT tools, become crucial for smooth operation and the avoidance of cognitive overload. This will generate not only a technical but also a social impact in factories.