Cross reality to enhance worker cognition in industrial assembly operations

  • Bruno Simões
  • Raffaele De Amicis
  • Iñigo Barandiaran
  • Jorge Posada
Open Access
ORIGINAL ARTICLE

Abstract

In this paper, we discuss how Cross Reality (XR) and the Industrial Internet of Things (IIoT) can support assembly tasks in hybrid human-machine manufacturing lines. We describe a Cross Reality system, designed to improve efficiency and ergonomics in industrial environments that require manual assembly operations. Our objective is to reduce the high costs of authoring assembly manuals and to improve the process of skills transfer, in particular, in assembly tasks that include workers with disabilities. The automation of short-lived assembly tasks, i.e., manufacturing of limited batches of customized products, does not yield significant returns considering the automation effort necessary and the production time frame. In the design of our XR system, we discuss how aspects of content creation can be automated for short-lived tasks and how seamless interoperability between devices facilitates skills transfer in human-machine hybrid environments.

Keywords

Cross Reality · Industry 4.0 · Short-lived assembly tasks · Human augmentation · Flexible systems

1 Introduction

Manufacturing assembly lines are subject to continuous fluctuation in production demands such as customization and quantity [15]. While many continuous and repetitive assembly tasks can be automated to improve production efficiency, the introduction of new product variants to the production line consistently poses a major challenge to automation. Additionally, it is necessary to guarantee workers’ well-being in newly automated industrial environments, which have evolved from pure mechanization to cognitive, and even emotional, interaction.

Humans are part of every engineered system and can increase competitiveness when integrated into manufacturing processes where the customization level and volume of production change in relatively short intervals. This is because humans can adapt to new manufacturing operations without disrupting the production environment. Consequently, overall human-machine performance in industrial systems is a fundamental engineering concern for processes characterized by a significant amount of manual human work. In such systems, human factors (including physical, mental, psychosocial, and perceptual factors) can determine the worker’s performance to some extent [23]. When human-machine system designs neglect specific features and qualities of human workers and presume their performance to be constant over time, they can have a negative impact on workers’ well-being or overestimate their performance [21]. Human factors such as repetitiveness of tasks, handling of heavy loads, fatigue, and static, awkward postures expose workers to ergonomic risks that adversely affect their performance.

Full robotic automation can be found in manufacturing sectors such as the automotive industry, where robots are designed for press tending, car body assembly, painting, and, to a large extent, the assembly of car engines [10, 16]. The value of automation is not just about reducing labor costs. It is about integrating machines that are more reliable than human workers in tasks that require speed, heavy lifting, long periods of work without interruption, repetition, and other human factors that inhibit human performance and safety [10]. At the frontier of manufacturing automation technology, Flexible Assembly Systems (FAS) assemble different models of specific product families with negligible configuration times [53]. However, it is rare for robots to be used for final assemblies. One major challenge in industrial robotics is the design of more economically feasible solutions to handle complex assembly tasks, product geometry variability, and intuitive and interactive means of dealing with process and geometric tolerances [10].

Machines can impair workers’ performance and well-being when automation is poorly designed, and the fact that human workers are more prone to errors than machines also poses a great challenge in the design of new assembly lines and automation processes. The unpredictable and erratic behavior of human workers can limit the optimal operation of robots. Facilitating communication between workers and machines can mitigate the risks each party poses to the capabilities of the other. A limitation of assembly automation is that the flexibility of the system is constrained by the automation design for existing product families. Consequently, manual work cannot be efficiently combined with automated systems that are constantly replaced, often forcing parts of the assembly line to be exclusively manual or automated. All these limitations emphasize the gap in the research literature concerning the integration of human factors engineering into industrial system design, as well as the need to investigate how industrial technologies affect human operators [23, 24, 49, 50, 58].

Removing the human factor can decrease the complexity of industrial solutions. However, maintaining the presence of human workers in the assembly chain can create a competitive advantage because humans can rely on their natural senses to form complex and intuitive, yet instant, solutions, whereas robots require reprogramming to address new product families and manufacturing problems. Manufacturing industries have found that augmented reality (AR), virtual reality (VR), and mixed reality (MR) designs and technologies have the potential to empower workers, actively and passively, to perceive and learn information about new assembly processes in a timely manner, without altering their established work routines. Hence, these technologies have the potential to reduce the cognitive load required for a worker to learn assembly procedures for new product families.

VR immerses workers in a fully artificial digital environment, while AR overlays virtual objects on the real-world environment [39]. MR is a system in which a worker is immersed in a digitized environment or interpolates between digitized and physical ones, with the nature of the experience varying widely depending on the context [36, 37]. Furthermore, Cross Reality (XR) is defined as a specific type of MR system wherein the worker interacts with the system in order to change the physical environment [4].

The heart of our current research is the detailed dissection of these technologies, their relationship to each other, and their unique abilities to augment workers’ cognition and to facilitate skills transfer in industrial assembly environments. We have also investigated how industrial sensors and robots can collaborate and support the transfer of skills to human workers in human-machine hybrid environments. Our objective is to empower human workers with means to augment their production environments with information that adapts to new product families and facilitates the acquisition of new assembly skills. In our case study, we investigate the impact of our system in assembly tasks performed by workers with disabilities, where the customization level and volume of the production change in relatively short intervals.

The remainder of this manuscript is structured as follows. The next section reviews previous studies of augmented reality applied to the field of Industrial Design and Manufacturing. Section 3 describes the research fundamentals needed to define the solution described in Section 4. The technological setup is described in Section 5, and the remaining sections summarize the results and discuss possible future work.

2 Literature review

The use of AR, VR, and MR to enhance natural environments and human cognition has been an active research topic for decades. Along the way, industrial augmented reality (IAR) emerged as a research line focused on how these technologies can support human workers in manufacturing processes. The use of IAR can be traced back to the seminal work of Thomas Caudell and David Mizell at Boeing [12] in 1992 and to the contributions of Navab [41] and Fite-Georgel [18].

With AR, VR, and MR technologies becoming increasingly robust and affordable, new use cases and applications are being explored. Kollatsch et al. [34] developed a prototype for the visualization of information from control systems (e.g., PLC, CNC) directly on-site. Simões et al. [60] proposed a middleware to create tangible in-site visualizations and interactions with industrial assets. Gauglitz et al. [20] proposed tablet software that augments airplane cockpits with AR instructions. Henderson et al. [26] investigated the use of HMDs in maintenance tasks for military vehicles through projected instructions and guidance. Other authors took augmentation a step further and introduced systems for industrial teletraining and maintenance featuring augmented reality and telecollaboration [40, 66]. In addition to augmenting the physical environment, these solutions enable experts to remotely collaborate and exchange knowledge at reduced costs.

Empirical data on the use of IAR in real-world environments is increasingly available and presents well-grounded insights into the efficiency of such systems. However, a number of challenging factors in IAR development need further research. Some challenges are transversal and related to the interdisciplinary knowledge required in areas such as computer graphics, artificial intelligence, object recognition, and human-computer interaction [28]. For example, intuitive user interfaces remain a challenge, particularly in situations where understanding the user’s actions and intentions is required to adapt to unexpected conditions. Along this line, Feiner et al. [17] presented a prototype IAR system that implemented what they defined as Knowledge-based Augmented Reality. The prototype relies on a rule-based intelligent backend to create graphics that respond to the communicative intent of the AR system. Feiner and his colleagues represented the communicative intent as a series of objectives that define what the generated graphical output is expected to accomplish. The authors demonstrated the prototype in a test case that helps workers with laser printer maintenance. While the proposed work is clever in how it creates the user interface, it is neither adaptive nor intelligent from a supervision standpoint.

Other research challenges are specific to assembly tasks. Assembling is the process of putting together several separate components in order to create a functional one [28]. Hence, an assembly task requires locating, manipulating, and joining parts, and ensuring the expected quality. One critical task is to guide the worker towards certain pieces and parts. Schwerdtfeger and Klinker [57] compared visualization techniques to give positional and directional guidance to the human assembler. To prevent the drawbacks of AR smart glasses, mainly their limited field of view, projection-based approaches have been broadly presented as an alternative. For example, Sand et al. [54] developed a prototype to project instructions into the physical workspace, which enabled the worker not only to find the pieces but also to assemble products without prior knowledge. Rodriguez et al. [52] proposed a similar solution in which instructions were directly overlaid on the real world using projection mapping. Petersen et al. [47] projected video overlays into the environment at the correct position and time using a piecewise homographic transform. By displaying a color overlay of the user’s hands, feedback can be given without occluding task-relevant objects. Other authors described mechanisms to detect and prevent human and machine errors, using computer vision, machine learning, and remote assistance methods [40, 43, 59, 64]. In a parallel line of research, Bortolini et al. [7] investigated the impact of digital technologies in assembly system design and management. Facio et al. [16] described how digital part-feeding policies can improve flexible systems in macro- and micro-logistic aspects, and Bortolini et al. [6] applied multi-objective optimization models for work balancing to minimize assembly line takt time and ergonomic risks.

Another major problem facing IAR is the development overhead of AR applications, which requires the creation of content and the design of the worker experience. There are popular software libraries like ARToolKit [31] and Vuforia [32] that can detect objects and depict 3D models in real time. However, their use requires programming skills to develop AR applications. An alternative is the use of AR authoring tools, which were first proposed over a decade ago [25, 33, 51, 60, 62]. Their main advantage is that they do not rely on costly and time-consuming recompilation steps, and consequently updates to the application are fast and can be completed efficiently. Other authors [14, 40, 60, 63] automated certain aspects of content creation, so assembly instructions can be automatically generated from CAD files, thus reducing the authoring burden. However, fine-tuning existing tools to solve a specific domain problem remains an open challenge.

Competitive advantages of AR and VR in industrial assembly and maintenance have been demonstrated in several studies. Tang et al. [62] showed that AR-based instructions reduced participants’ errors by 82% and lowered cognitive load when compared with paper-based instructions, instructions on a monitor, or even instructions statically displayed on an HMD. However, they also concluded that occlusions by AR content and presentation of information over a cluttered background can decrease overall task performance. Boud et al.’s [8] experiments demonstrated that task completion times were longer when 2D drawings were used to train the assembly of water pumps before assembling the real product, in comparison with AR and VR training. Funk et al. [19] deployed an MR system and observed a decrease in performance for expert workers and an increase in skill acquisition for untrained workers. Curtis [13] indicated shortcomings in the practicality and acceptability of displaying instructions on HMDs.

Other advantages presented by VR, AR, MR, and XR to assembly scenarios include reduction of data retrieval times [65], improvement of ergonomic behaviors in assembly [22], more comprehensive skill transfers, and reduction in training times [27, 45, 46, 56].

3 Methodology

In this section, we formulate the research problem and an XR solution to augment human cognition in assembly environments. XR technology can mitigate workers’ cognitive limitations and facilitate the acquisition of skills, empowering them with new means to undertake more complex tasks. Throughout this paper, we apply our methodology to the case study of assembling medium-voltage switches. The novel contribution of this work is a flexible and inclusive XR solution to empower assembly workers with new tools to learn and supervise the assembly of products prone to variations in volume and diversity.

3.1 Classic workflow

In the classic workflow, workers’ disabilities affect their ability to perform their job at three levels: interpretation of the wiring instructions to be executed, performance of conductivity checks on assembled components, and interaction with robots. The classic workflow for the production of medium-voltage switches consists of two steps: Assembly system design by a production manager (see Fig. 1) and interpretation of schematics by shop-floor workers (see Fig. 2).
Fig. 1

Classic workflow for assembly system design

Fig. 2

Classic workflow for the assembly of medium-voltage switches in shop-floors

The system design step is the adaptation of contractors’ data into assembly manuals that are easily comprehended by workers. In this stage, the production manager identifies the assembly sequence for the components. Then, the manager matches the task requirements with the skills in a pool of workers available for the job. Finally, the production manager designs a new version of the assembly manual that is suitable for all workers chosen for the job. This instructional content creation process is an area where both training times and assembly errors can be reduced.

The generation of these assembly manuals for short-lived production series is time consuming and does not scale for a large variety of human disabilities and hardware. This is a limitation that weighs heavily on companies like Lantegi Batuak, which employs more than 2000 workers with disabilities at its facilities. Their latest report [35] states that 63% of their workers have mental disabilities, 21% have physical disabilities, and 4% have some kind of mental disorder. The remaining 12% do not have any diagnosed disability. Lantegi Batuak operates assembly plants for medium-voltage switches and, at the time of writing, their process for adapting schematics for shop-floor workers is the one described in the classic workflow.

The medium-voltage switch assembly line is serialized in such a way that several workers take part in completing the assembly of each unit, each of them performing the same intermediate assembly step (cable wiring) repeatedly on successive units that advance along the assembly line. The entire line can be duplicated to respond to peaks in production demand by training additional workers.

This assembly process can be generalized to other assembly tasks, which may differ in their levels of human-robot collaboration. In our case study, we want to give emphasis to tasks where robots cannot undertake the entire assembly task, either due to technological limitations or costs.

Robots provide human workers with the components that are required in a well-determined order: The sequence described in the assembly manual. For each part, the worker reads the next instruction, searches for the relevant cables and electric components, and does the wiring (see Fig. 2).

Assembled units are then inspected by a skilled worker responsible for quality control. It is the responsibility of this worker to perform a conductivity check for each individual connection. The quality control process is expensive in personnel and production time. Further contributing to this cost, when the worker detects an assembly mistake, it is necessary to disassemble and reassemble the unit. The introduction of mechanisms for visual inspection during assembly has the potential to decrease average production overheads and times. Unfortunately, production series are limited to a few units and are highly customized to contractors, which makes this step difficult and inefficient to automate.

3.2 Generalization and modelization

Assembly tasks present a unique set of challenges for XR, ranging from interaction issues between workers and robots to human factors and production requirements.

Digital augmentation of assembly shop-floors is a two-phase process. In the first phase, which we call the design phase, we define what the task is and how assembly instructions can augment the worker cognitively. In this phase, it is necessary not to inhibit freedom or creativity, or else the design of the hybrid human-machine space will fail to yield any competitive value over traditional automation. The system should adapt the visual context and schedule a new set of actions whenever a worker undertakes an action (e.g., installing a component) that deviates from these instructions without affecting the assembly sequence. In the second phase, which we call the execution phase, we infer functional requirements from the worker’s skills, human-automation interaction preferences, and task context. A worker-driven XR application has to adapt the assembly instructions to different task contexts and worker skills, requiring a significant level of system modularity. Partial worker blindness can, for example, shift the interaction towards aural senses by activating audio-based technologies and adapting content presentation. Another example arises in tasks where the hands are used to hold or assemble specific components and are therefore not available for interaction with visual interfaces; in such cases, the XR system can react through interaction modalities like eye gaze, voice control, or interaction with projected content.

3.2.1 Modelization of design phase

Let ω(t) be the average time that is needed to optimize the wiring instructions for a shop-floor task t, \(\phi (t, \mathbb {S}) \rightarrow (k, \mathbb {W}) \) the computation that takes k seconds to find a task force \(\mathbb {W} \subseteq \mathbb {S}\) for the task requirements, and \(len(x): \mathbb {W} \rightarrow \mathbb {N}\) the number of disability classes for the subset \(\mathbb {W}\). Let g(t,w) be the time required for adapting the set of instructions t to a specific worker profile w. Then, Eq. 1 determines the time effort required to design an assembly manual for a set of worker profiles \(\mathbb {W}\).

$$ \begin{array}{@{}rcl@{}} ds(t, \mathbb{S}) &=& \omega(t) + k + \sum\limits_{w=1}^{len(\mathbb{W})} g(t, w),\\ (k, \mathbb{W}) &=& \phi(t,\mathbb{S}), \mathbb{W} \subseteq \mathbb{S}, k \in \mathbb{R} \end{array} $$
(1)

In the above formulation, the optimization processes ω(t) are considered to be independent from the task forces \(\mathbb {W}\). Our research aims to enhance human cognition rather than to improve these functions, which would require dealing with a number of other factors that are outside our research scope, e.g., layout optimization. Hence, it is not our objective to find a global minimum for ω(t) and \(\mathbb {W}\).
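The design-phase cost in Eq. 1 can be illustrated with a short sketch. All function names and timings below are hypothetical placeholders, not values measured in our case study:

```python
# Hypothetical sketch of Eq. 1: total effort to design an assembly manual.
# omega_t is the instruction-optimization time, k the worker-matching time
# returned by phi, and adaptation_times the g(t, w) terms for each chosen
# worker profile. All numbers are illustrative assumptions.

def design_effort(omega_t, k, adaptation_times):
    """ds(t, S) = omega(t) + k + sum of g(t, w) over the task force W."""
    return omega_t + k + sum(adaptation_times)

# Example: optimizing instructions takes 120 min, matching workers takes
# 5 min, and adapting the manual to three worker profiles takes 30, 45,
# and 20 min respectively.
total = design_effort(omega_t=120, k=5, adaptation_times=[30, 45, 20])
print(total)  # 220
```

The sketch makes explicit that the adaptation sum grows with the number of distinct worker profiles, which is precisely the term our progressive skill model targets.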

3.2.2 Modelization of execution phase

An assembly task t requires by definition the manipulation and joining of parts (cables, in our case study). The task of wiring a cable requires the consideration of a number of factors: understanding of the assembly instruction, grasping the cable, determining the relative positions of the cable and its target component in physical space, transporting the cable towards the component, and inserting the cable accurately. Afterwards, the process is typically followed by quality control procedures.

The interpretation time of an instruction i for a worker w is defined as follows: Let α(w,i) be a function that describes the reading complexity (e.g., average complexity of the symbols in the description of the instruction), β(i) the average vocabulary length of the material that describes the task, γ(w) the interpretation ability of the worker weighted in terms of reading, hearing, and tactile skills, and λ(w,wt) the function that quantifies the worker’s fatigue and stress at a given moment wt. Then, the overall interpretation time of an instruction i can be formulated as \({\Phi }: \mathbb {R}^{4}\rightarrow \mathbb {R}\):

$$ h(w,i) = {\Phi}(\alpha(w, i), \beta(i), \gamma(w), \lambda(w,w_{t})) $$
(2)

The above formulation considers, for example, that for people with dyslexia short statements are easier to cope with than longer sentences. Moreover, it considers human factors like fatigue and stress that affect the ability to accurately interpret the assembly instructions.
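Since Φ is left abstract in Eq. 2, the following sketch assumes one possible multiplicative form purely for illustration; α, β, γ, and λ are reduced to scalar inputs, and the chosen combination is an assumption, not our actual model:

```python
# Hypothetical sketch of Eq. 2: interpretation time of instruction i for
# worker w, with Phi assumed multiplicative. Reading complexity (alpha)
# and vocabulary length (beta) increase the time, interpretation ability
# (gamma) decreases it, and fatigue/stress (lam) scale it up.

def interpretation_time(alpha, beta, gamma, lam):
    """h(w, i) = Phi(alpha(w, i), beta(i), gamma(w), lambda(w, w_t))."""
    return (alpha * beta / gamma) * lam

# A worker with strong reading skills (gamma=2.0) and low fatigue
# (lam=1.1) interpreting a short, simple instruction of four words:
t = interpretation_time(alpha=1.5, beta=4, gamma=2.0, lam=1.1)
```

Any monotone combination with the same signs per factor would serve the model equally well; the point is only that worker-specific and instruction-specific factors interact.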

Once the purpose of the assembly instruction is clearly understood, workers have to assemble the cable it refers to. Let f(w,ic,i) be the time needed for a worker w to find the cable ic described in an instruction i, and Λ(w,ic,i) be the average time required for the worker to wire it. Let \(q(t): \mathbb {P} \rightarrow \mathbb {R}^{2}\) be the average time required to visually inspect the unit, perform conductivity tests, and detect a possible faulty instruction e. Assume \(d(e) = {\sum }_{j=1}^{k} d_{t}(j)\) to be the average time to disassemble and reassemble the instruction e. Then Eq. 3 describes how different instructions and worker profiles can affect overall productivity.

$$ \begin{array}{@{}rcl@{}} \rho(t,w) &=& \sum\limits_{i=1}^{len(t)} h(w, i) + s(i_{c}) + f(w, i_{c}, i) \\ &&+{\Lambda} (w,i_{c},i) + e_{t} + \sum\limits_{j=e}^{len(t)} d(j), (e_{t},e)\in q(t) \end{array} $$
(3)
where \(s(i_{c}) \geq 0\) is the restock time for the component \(i_{c}\).
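Eq. 3 can be sketched as follows; the per-instruction timings and the single faulty instruction are invented for illustration:

```python
# Hypothetical sketch of Eq. 3: overall time rho(t, w) for worker w on
# task t. Each instruction contributes interpretation (h), restock (s),
# search (f), and wiring (Lambda) times; inspection adds e_t, and each
# instruction flagged faulty by q(t) adds disassembly/reassembly time d.

def task_time(instructions, inspect_time, rework_times):
    """instructions: list of (h, s, f, lam) tuples, one per instruction."""
    per_instruction = sum(h + s + f + lam for (h, s, f, lam) in instructions)
    return per_instruction + inspect_time + sum(rework_times)

# Two instructions with (interpret, restock, find, wire) times in seconds,
# 30 s of conductivity checks, and one faulty wire needing 40 s of rework.
rho = task_time(
    instructions=[(5, 0, 10, 20), (8, 2, 12, 25)],
    inspect_time=30,
    rework_times=[40],
)
print(rho)  # 152
```

The sketch shows why reducing interpretation time h and rework d, the terms most sensitive to worker profile, dominates the productivity gains targeted by our system.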

3.2.3 New modelization

Each operation modeled in the equations above requires a different level of haptic and visual guidance to efficiently enhance the skills of the worker. If multiple cables are to be inserted into a single switch hole, then further cognitive considerations, such as planning, are required. Because assembly tasks can involve such cognitive activities, we must demonstrate and observe the effect of different human-machine interactions and instruction formats on the productivity and learning of assembly tasks.

The technological enabler of the flexibility and customizability delivered by our work is a condition-based rule system. The decision module enables the system to adapt to unforeseen variables during the design phase, e.g., to contemplate interaction behaviors based on different worker profiles and hardware components, without a substantial increase in the complexity of the design process. Therefore, our research aims to minimize the cost of Eq. 4 within Eq. 1 through the definition of a progressive skill model.

$$ \sum\limits_{w=1}^{len(\mathbb{W})} g(t, w), (k, \mathbb{W}) = \phi(t,\mathbb{S}), \mathbb{W} \subseteq \mathbb{S}, k \in \mathbb{R} $$
(4)

The definition of a progressive skill model enables assembly instructions to be automatically adapted to different worker profiles, not only in terms of information visualization modality but also in terms of interaction modalities, e.g., the inclusion of gamification, or the superimposition of less information whenever experience analysis shows that workers already know how to assemble sub-parts, hence optimizing Φ.
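As an illustration of such a progressive skill model, a condition-based rule could lower the amount of superimposed information as the worker gains experience with a sub-assembly; the thresholds and return labels below are assumptions for the sketch, not the rules used in our deployment:

```python
# Illustrative progressive skill model: guidance detail per sub-assembly
# drops as completions accumulate and the error rate stays low.
# Thresholds (3, 10 completions; 10% errors) are invented assumptions.

def detail_level(times_completed, error_rate):
    """Return how much guidance to superimpose for a sub-assembly."""
    if times_completed < 3 or error_rate > 0.10:
        return "full"       # step-by-step visual and textual guidance
    if times_completed < 10:
        return "summary"    # highlight target locations only
    return "minimal"        # show the instruction only on demand

print(detail_level(times_completed=1, error_rate=0.0))    # full
print(detail_level(times_completed=5, error_rate=0.02))   # summary
print(detail_level(times_completed=20, error_rate=0.01))  # minimal
```

Encoded as rules in the decision module, such conditions let the same authored instruction serve novices and experienced workers without re-authoring.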

The modularity of the proposed system facilitates its integration with external tracking and simulation algorithms, which can provide additional inputs regarding incorrect assembly actions and system failures, and encode events as immersive actions surrounding the worker. A decision engine is utilized to analyze machines and monitor the worker’s activity, translating facts and events into courses of action that can be carried out by the system.

4 Proposed methodology and system

Although collaborative human-robot production cells are an intriguing prospect for companies, the complexity of programming environments that integrate complex variables like human workers and a large number of sensors remains one of the major hurdles preventing flexible automation using hybrid industrial robotics.

The proposed system provides two distinct user workflows: One for content creators and administrators, and one for workers. The content authoring tool empowers users with mechanisms to deliver differentiated content for various devices and worker profiles, as per Section 4.1. The workflow is also optimized to reduce the time required to collect media and present live data to shop-floor workers. The assembly workflow tool enhances the interaction between human workers and machines by providing machines with information about workers’ activities and needs, while concurrently augmenting the workers’ abilities to work effectively and efficiently with the machinery, as per Section 4.2. In Section 4.3, we describe how these two workflows integrate with each other.

4.1 Experience authoring

The proposed solution describes a set of mechanisms to automate data ingestion (e.g., electronic schemes, wiring instructions, 3D models of electronic parts). Data ingestion can occur in two distinct phases of the XR application: During application design (commonly used for static datasets) and on demand during the execution of the application (commonly using data collected directly from sensors).

Figure 3 provides an overview of the data ingestion process. In the proposed implementation, users can choose to ingest data through networked services (e.g., sensors, machine-generated data), capture it with a Microsoft HoloLens, or produce it with mobile applications. Wiring instructions in electronic specification schemes and/or Excel format can be dragged onto a simple web interface to be automatically translated into XR instructions. Although workers are not required to follow any specific assembly order, our field tests demonstrated that humans prefer wiring the longest cables first. This preference was considered for the presentation of instructions.
Fig. 3

Design workflow for the assembly of medium-voltage switches in shop-floors
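The translation of a wiring table into ordered XR instructions can be sketched as follows; the row layout and field names are assumptions for illustration, not our actual import format:

```python
# Hypothetical ingestion sketch: rows of a wiring table (e.g., exported
# from Excel) become XR instructions, ordered longest cable first to match
# the preference observed in our field tests. Field names are invented.

def to_xr_instructions(rows):
    """Each row: (cable_id, source_pin, target_pin, length_mm)."""
    instructions = [
        {"cable": cid, "from": src, "to": dst, "length_mm": length}
        for (cid, src, dst, length) in rows
    ]
    # Workers preferred wiring the longest cables first.
    return sorted(instructions, key=lambda i: i["length_mm"], reverse=True)

rows = [("W1", "X1.1", "K2.3", 250), ("W2", "X1.2", "K1.1", 480)]
print([i["cable"] for i in to_xr_instructions(rows)])  # ['W2', 'W1']
```

Because the ordering is a presentation preference rather than a constraint, the same instruction set can be re-sorted per worker profile without touching the source data.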

System-generated XR instructions do not contain enough information to accurately locate assembly pieces in the physical environment. Therefore, it is necessary to manually tell the system what parts look like so that they can be tracked. It is also necessary to visualize non-spatial content (e.g., task descriptions) in appropriate locations, using devices such as projectors and Microsoft’s HoloLens. To automate part of this process, designers can define working areas for parts with predetermined starting locations. These areas define where components can be found or placed and where interfaces like social feeds, gamified elements, and other kinds of system messages can be manipulated. During an import phase, instructions are automatically mapped (source or destination) to these areas. Afterwards, it is necessary to further map and animate elements to enhance the XR experience. This task can be executed directly within the HoloLens by moving imported elements around, or within a web browser using the projector perspective. We have not yet implemented any additional automation module. However, the system is designed to be extended, supporting modules using a variety of tracking and coordinate systems. Since those modules might have their own data formats and coordinate systems, we have also defined data fusion mechanisms to handle the data transformation between modules.

4.2 Worker-personalized interaction

Our system relies on a Rule Management System (RMS) to enforce condition-based decisions that use technology installed on the shop floor to provide workers with an augmented assembly experience that seamlessly spans over disparate interaction devices, namely mobile tablets and Microsoft’s HoloLens. The RMS creates a relation-based representation of live in situ data, and then codifies it into a database that is used for reasoning. Like other RMSs, our system depends on two types of memory: A working memory to hold data facts that describe the domain knowledge, and a production memory to monitor data events represented as conditional statements. The reasoning engine is responsible for monitoring changes in the network and matching present facts with the rules and constraints defined for each problem.

The reasoning engine organizes and controls the workflow based on forward and backward reasoning methods. In the former method, the RMS begins from a set of initial facts, which are then used to determine new facts (or sub-flows) until it reaches its final goal. In the latter method, the engine starts from its final goal and then searches for rules that lead to it.
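The forward-chaining loop described above can be sketched in a few lines; the example facts and rules are invented for illustration and do not correspond to our deployed rule base:

```python
# Minimal forward-chaining sketch: facts in working memory are matched
# against rules in production memory, firing rules until no new fact can
# be derived (a fixpoint). Rule contents are illustrative assumptions.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs; premises is a set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"worker_present", "part_in_area"}, "start_projection"),
    ({"start_projection", "hands_busy"}, "enable_voice_control"),
]
derived = forward_chain({"worker_present", "part_in_area", "hands_busy"}, rules)
print("enable_voice_control" in derived)  # True
```

A production engine such as Drools replaces this naive loop with an optimized matching network, but the fact/rule separation is the same.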

Drools [11] is an example of a Business Rules Management System and the solution adopted in our system (see Fig. 4). It delivers a solution for creating, managing, deploying, and executing rule-based workflows. Drools exposes a very accessible Application Programming Interface (API), allowing rules to be easily modified by humans and digital processes alike. One advantage of Drools is its ability to apply hybrid chaining reasoning, which is a mix between the forward and backward chaining of traditional RMSs.
Fig. 4

Business rules management system

The RMS components enable us to deploy a system that is self-configuring and modular. It allows for a seamless integration of intelligent modules while maintaining the value of human elements and, above all, keeping the design process simple. In our test case, workflow rules were defined by humans at the modeling phase. However, through the application of artificial intelligence algorithms, these rules could be updated and/or created dynamically. Furthermore, within a workflow, specific modules might implement their own processes to create rules that help to accomplish subtasks such as path finding and collision detection.

In our implementation (Fig. 5), content creators can define conditions based on what types of devices are connected and/or on data events (e.g., a sensor value). Conditions may be limited to a set of target devices that handle specific events expected by the RMS. Actions are defined using the authoring API, in the same fashion as the authoring of an XR instruction.
Fig. 5

Proposed workflow for assembly tasks on shop floors

In Section 4.3, we explain how our system integrates different teaching and assistive modalities that are not limited to the visualization of primitive types of multimedia content. Restricting instruction to primitive multimedia types inhibits learning because it constrains workers to sequential mediums of information, whereas the information being taught may not be best conveyed sequentially. The different teaching and assistive modalities have been encoded in three groups of messages modeled in JavaScript Object Notation (JSON) format [9]: graphical elements, data, and event triggers (or “constraint-based events”).
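As an illustration of the three message groups, the snippet below sketches one hypothetical message of each kind (the paper does not publish its exact schema, so all field names here are assumptions):

```python
import json

# Hypothetical examples of the three JSON message groups; field names
# are illustrative only, not the system's actual schema.
graphical = {"type": "graphical", "element": "arrow3d",
             "anchor": [0.42, 0.10, 1.35], "color": "#ff9900"}
data = {"type": "data", "source": "torque_sensor_3", "value": 2.7, "unit": "Nm"}
trigger = {"type": "event_trigger",
           "when": {"sensor": "button_1", "equals": 1},
           "action": "advance_step"}

for msg in (graphical, data, trigger):
    payload = json.dumps(msg)          # serialized form exchanged between devices
    assert json.loads(payload) == msg  # round-trips losslessly
```

Because every device speaks the same three message types, a tablet, a projector controller, and a HoloLens can all consume the same stream and render whichever parts they support.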

4.3 Industrial Internet of Things (IIoT) and experience authoring

In order to allow the seamless interaction between machines in the physical environment, we needed to implement a communication bridge for each sensor, PLC, device, and robot on the shop floor. Such a bridge brings connectivity to devices and provides a way to translate device-specific events between machines and the overall system. Communication bridges do not necessarily implement machine control functions. They can be used to implement digital twins for physical objects—even before they are built. A digital twin is a computerized (digital) representation of a physical asset, on top of which data can be visualized and analyzed [42]. This digital representation might include a live and comprehensive description of any given physical object. Such a representation would be useful in simulation models [61].

The implementation of digital twins for different machines enables designers to run simulations that demonstrate integrated systems prior to their deployment and to predict the time required for their installation. When the twinned asset is in physical operation, the digital twin can be used to predict component failure. Hence, once implemented, the concept of a digital twin can be used during the entire life cycle of the device.

The digital twin paradigm does not inherently require a visual approach. A digital twin can exist as long as sensors on the physical object capture data about its condition and feed it to other systems via some form of IIoT connection.

In our implementation, data generated by machines is associated with unique identifiers in the digital twin, e.g., the joint of a robot. This approach enables the physical system and its submodules to feed the virtual representation of the physical space with real-time streams of data. In addition to providing information on the state of the device, the digital twin also works as an interaction API; e.g., the robot can be requested to pick an object at a given position by inputting new target values for each sensor composing the robot in the digital twin. However, for this interaction methodology to work, the communication bridge must translate the sensor values received from the XR system into device-specific commands that produce the corresponding sensor updates.
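The interplay between twin and bridge can be sketched as follows. This is a minimal illustration under assumed names (the classes, the sensor identifier, and the command string are all hypothetical, not the system's actual interfaces): the twin mirrors sensor values keyed by unique identifiers, and the bridge translates a requested target value into a device-specific command.

```python
# Minimal digital-twin sketch; all names and the command format are hypothetical.
class RobotBridge:
    """Communication bridge turning twin-level targets into device commands."""
    def translate(self, sensor_id, target):
        return f"MOVEJ {sensor_id} {target:.2f}"

class DigitalTwin:
    def __init__(self, bridge):
        self.state = {}        # sensor_id -> last known value
        self.bridge = bridge

    def on_sensor_update(self, sensor_id, value):
        # Real-time stream from the physical asset keeps the twin current.
        self.state[sensor_id] = value

    def request(self, sensor_id, target):
        # Interacting with the twin: ask the physical device to reach `target`.
        return self.bridge.translate(sensor_id, target)

twin = DigitalTwin(RobotBridge())
twin.on_sensor_update("joint_2", 0.57)   # live value fed by the robot
cmd = twin.request("joint_2", 1.20)      # XR system writes a new target
print(cmd)  # MOVEJ joint_2 1.20
```

The key design point is that the XR system only ever reads and writes twin state; device-specific protocols stay encapsulated in the bridge.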

Communication bridges are also fundamental to seamless communication between interactive devices. In the design phase of an application, they enable a number of different devices to be used for the authoring of assembly instructions. One authoring approach these bridges make possible is modeling the XR interaction using both a web editor (import feature) and an MR HMD such as Microsoft’s HoloLens. The designer can use the HoloLens to model 3D aspects of the assembly instructions that would otherwise require a CAD system, e.g., spatial operations such as placing and picking objects in 3D space. The content creation process can also be supported by mobile tablets, which make it easy to integrate 2D multimedia content into a 3D scene. When Microsoft’s HoloLens is paired with traditional technology that is more efficient for content creation, the time to design a process can be significantly reduced while the calibration between digital content and the physical space is spatially maintained. Both tablets and the HoloLens can provide similar authoring functionalities; however, authoring is most efficient when both are used in combination. In our tests, we used tablets to import the schematics and insert additional comments and graphical elements. Traditional 2D point-and-click metaphors proved especially efficient for this purpose.

5 Evaluation

Our evaluation determines the extent to which our content creation approach offers benefits for short-lived tasks. Since immersive approaches for human-robot interaction are too new to have an established baseline, our evaluation focuses on the following goals: (1) to assess the usability of our system, (2) to collect qualitative feedback on the design of the system, and (3) to record how people interact with the remote system via the provided user interface (UI).

Two user studies were performed to meet these goals in the context of a digital assembly system design. Two user populations were considered: production managers (see Fig. 1) and shop floor workers. The experiments involved one production manager and ten shop floor workers.

5.1 User study 1—design of information sharing

We propose an authoring system that can quickly adapt to different assembly scenarios. However, the flexibility of the system is constrained by the skill of the worker using it. The questions we seek to answer with this study are: (RQ 1) Can new and infrequent users effectively exploit the features of the authoring tool to create spatially augmented assembly stations? (RQ 2) To what extent are users capable of understanding and modeling the interaction for different worker skills?

The hardware consisted of a projector, a 3D camera, and a tablet (Surface Pro). The authoring interface is HTML-based and directly accessible from the tablet. Each participant (production manager) was assigned a task that entailed the design of an XR tutorial to guide workers at their assembly workstations. The authoring steps consisted of (1) ingesting schematics, (2) collecting any multimedia material necessary to support the operator during the task (e.g., images and videos), and (3) tuning the system and peripherals to effectively adapt to a set of worker disabilities.

In the evaluation of this case study, we applied the cognitive walkthrough method validated by [48]. This method is particularly appropriate for evaluating system learnability, especially for new and infrequent users.

5.2 User study 2—system usability

Usability is a key factor in the success of any interactive system, so it is useful to have an evaluation method that yields reliable measures of it. Based on the definition of ISO 9241-11 (Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts), system usability can be measured from three perspectives: effectiveness (whether the system lets users complete their tasks), efficiency (the extent to which users expend resources in achieving their goals), and satisfaction (the level of comfort users experience in achieving those goals). Tools like the Computer System Usability Questionnaire (CSUQ), Questionnaire for User Interface Satisfaction (QUIS), System Usability Scale (SUS), Post-Study System Usability Questionnaire (PSSUQ), and Software Usability Measurement Inventory (SUMI) are commonly used to conform measurements to this definition of usability.

In this user study, the usability (effectiveness, efficiency, and satisfaction) of the system was measured with the SUS questionnaire [2]. SUS consists of ten questions with responses on a 5-point scale from “Strongly disagree” to “Strongly agree.” It is significantly shorter than SUMI, and recent psychometric analyses have demonstrated that it also provides reliable measures of perceived learnability. Furthermore, SUS can be applied to a wide range of technologies, including those that have not yet been invented [3, 55], such as the novel immersive system proposed here.

The experimental setup consisted of a dual-arm collaborative robot, a projector, a HoloLens, a 3D camera, and an analog button to enable the human-system interaction. The prototype was designed to draw worker attention to relevant information about the task (e.g., highlighting components, depicting symbols that identify error-prone steps, and associating aural notifications with different states of the task) and to provide step-by-step guidance with media (e.g., 3D animations, sounds, colors, videos, and interactive checklists).

The information provided to workers consisted of the following elements: a checklist and warning messages displayed on the HoloLens, or projected onto designated spaces when the device was offline; visual highlighting of the items to be assembled (augmented with aural messages for workers with vision disabilities); and textual instructions personalized in size, color, and content to different visual disabilities (two instruction versions were defined, one more detailed than the other).

The robot was responsible for stock-feeding the components to workers as swiftly as possible and for ensuring the correct cable was connected. Workers were provided a visual highlight of the location on the table of the next component to be installed, as well as where on the product the installation was to take place. Information was provided through the individual’s HoloLens, aural communications, or projection mapping. The task required the worker to wire 8 different cables and to follow a few assembly conventions for the positioning of the cables. After the task, automated tests provided visual feedback on the assembled unit.

During the experiment, we recorded, with the consent of all workers, their interaction with the system to identify limitations and derive better requirements for the system. Afterwards, we invited the participants to complete the SUS questionnaire, the results of which are discussed below.

6 Observations

After the experiments, we calculated the average of the usability values of all participants to obtain the SUS score (Table 1). The mean SUS score is 75.71, the median is 77.50, the maximum is 87.50, and the minimum is 67.50. The overall usability falls in the “good” score range according to [2]. Figure 6 summarizes the results for each question of the questionnaire.
Table 1

Mean and standard deviation for each item in the questionnaire

Question    1     2     3     4     5     6     7     8     9     10
Mean        4.57  2.00  4.00  4.14  4.71  1.43  4.00  1.86  4.43  2.00
Std. dev.   0.53  1.00  0.82  0.90  0.49  0.79  1.41  1.46  0.79  0.82

Fig. 6

Candlestick chart for the SUS scores of each question

In the results summarized in Fig. 6, we observed a positive sentiment towards the usefulness of the system (average score of 4.57), which led to a willingness among users to continue using it. The perception that the system was well integrated with their physical space and task goals received the highest user score (average of 4.71). Participants also reported that information was easily accessible and consistent with their needs (average score of 1.43 on the negatively worded inconsistency item). We observed lower scores in features like simplicity (average score of 2.0), ease of use (average score of 4.0), and prior knowledge (average score of 2.0). In a post-interview, we observed that participants were not familiar with the technology, e.g., cobots and projected interfaces. Although the entire system was a novelty to the workers, the overall reaction was that they felt very confident while interacting with it (average score of 4.43).
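As a cross-check, the reported overall score can be reproduced from the per-item means in Table 1 using the standard SUS scoring formula (not spelled out in the text): odd-numbered (positively worded) items contribute score − 1, even-numbered (negatively worded) items contribute 5 − score, and the sum is multiplied by 2.5.

```python
# Recompute the overall SUS score from the per-item means in Table 1.
means = [4.57, 2.00, 4.00, 4.14, 4.71, 1.43, 4.00, 1.86, 4.43, 2.00]

def sus_score(items):
    # Odd items (1-indexed) are positively worded: contribute score - 1.
    # Even items are negatively worded: contribute 5 - score.
    total = sum((s - 1) if i % 2 == 0 else (5 - s)
                for i, s in enumerate(items))
    return total * 2.5

print(round(sus_score(means), 1))  # 75.7
```

The result agrees with the reported mean of 75.71 up to rounding of the item means.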

An analysis of the responses collected with the cognitive walkthrough method revealed minor usability issues that were not critical to the completion of the task (research question 1), e.g., menu labels that could be more intuitive. Participants succeeded in creating a digital manual for the schematics assigned as an exercise. In a post-interview, participants described the overall tool as intuitive and flexible. With regard to research question 2, participants had difficulty autonomously redefining the interaction model proposed by the system for the different skills and disabilities. A few users suggested including interaction guidelines to support their own judgment. Hence, we conclude that production managers would need additional help to understand how to customize the interaction for different worker skills.

7 Discussion and limitations

In the assembly of medium-voltage switches, technicians must acquire a variety of assembly skills and knowledge to effectively work with many different products. Traditional methods to enhance the skills and abilities of the worker, such as on-the-job training (OJT), cannot fulfill the requirements expected in future trends for the assembly sector. A new human augmentation system could assist workers both with learning assembly tasks and with executing them. The purpose of this work was to analyze the use of XR systems as a training and supervising medium for workers with disabilities in industrial settings. Modern shop floors are equipped with abundant sensors that have the potential not only to help train workers to perform a task, but to train them to perform alongside collaborative robots. Furthermore, the integration of XR with the IIoT has many valuable applications, such as the visualization of information collected and summarized by machines, the easy reprogramming of machines to prioritize given tasks, and the facilitation of the use of cobots.

Unfortunately, industrial environments present a severe challenge to XR applications. Usually, these environments embed multiple network architectures with different levels of visibility and security. Traditional applications might not be designed to communicate across disparate networks, thus reducing their interaction potential to a limited number of sensors and devices. Cloud-based systems can overcome this problem, but they fail to provide real-time interaction due to the latency between networks. Our system facilitates cross-network communication, provided that communication bridges are deployed in each network. However, further tests will be required to validate the solution from a cybersecurity perspective, as this approach to computation is traditionally fraught with vulnerabilities.

We observed that a combination of Microsoft’s HoloLens, projectors, and mobile tablets can yield positive benefits to human safety and human-machine interaction. However, we found a number of issues when this technology is deployed in operational environments. The HoloLens’s field of view and battery life hardly meet the requirements of continuous assembly production. Projectors rely on lamps that need more maintenance than flat-screen monitors and have to be replaced periodically. Moreover, participants with cognitive disabilities reported no previous experience with the technology and required occasional interaction reminders.

These constraints are at odds with the conditions in assembly environments and require proper attention when designing the workspace. In future work, we need to investigate how to keep the dialogue between the system and workers with cognitive disabilities active, e.g., by prompting information about the current action and reminding the worker how to interact with the system once the action is completed. Experienced workers often ignored digital instructions but found the system useful for its alerts about potential assembly errors. Overall, the system has shown promising results in teaching and supervising workers with disabilities who would otherwise require human supervision. Yet there are safety concerns that must be further investigated, e.g., proximity to robots.

8 Conclusions and future work

Recent research has shown that about 60% of all occupations will have at least 30% of their activities automated by 2030 [5]. According to a study conducted by the McKinsey Global Institute (MGI), between 400 million and 800 million individuals worldwide could be displaced by automation and need to find new jobs by 2030, including up to one-third of the workforce in the USA and Germany [38]. Consequently, the current labor market will be upended within a fraction of a generation, and jobs not requiring proficiency with advanced technology will dwindle [29]. However, for the near future, most jobs entail major requirements that cannot be handled by computers alone [44]. Hence, when we discuss how automation technology should be deployed in workplaces, we must also emphasize the importance of augmentation rather than automation. Our contribution goes exactly in this direction. We describe how XR can deliver live on-the-job skill refinement and yield greater benefits to human safety and product quality than traditional methods in industrial manufacturing.

Our system is designed to visually augment human workers with the information necessary to undertake specific, frequently changing assembly tasks. Specifically, the system supports workers in learning assembly tasks one at a time and assists them in inspecting and performing them. Additionally, it introduces an immersive approach for the creation of new assembly manuals, greatly streamlining the teaching process. Workers’ disabilities were considered in defining how the XR system interacts and presents information, e.g., slower versus faster animations for mentally impaired workers, personalized color palettes for each worker, simplified and magnified representations for workers with reduced sight, and haptic sensors for extra feedback, in an attempt to maximize workers’ satisfaction with the hybrid human-machine environment and, hence, bolster their productivity. By providing a reliable and efficient means to make existing workers better and safer at their jobs, our research not only increases their intrinsic value to manufacturing firms but also bolsters their satisfaction and fulfillment with their duties.

Future research will aim to better understand from a cognitive perspective how humans interact with virtual objects in a XR environment so as to be able to design a system which effectively and appropriately augments human performance through the detection of cognitive needs rather than through users’ deliberate actions. In these endeavors, it will be necessary to investigate virtual object interaction techniques in XR in terms of hand-based manipulation and precise selection mechanisms, and the extent to which they relate to existing literature. The central hypothesis is that relevant cognitive bottlenecks can be systematically identified and addressed by an intelligent cognitive architecture that receives input from XR hardware, biometric sensors, workplace data monitors, and intelligent machinery, and that this information can be leveraged to achieve effective human augmentation. The integrity of this hypothesis can be evaluated by examining behavioral reactions to XR systems using appropriate cognitive frameworks, including models exploring information processing via attentional control and working memory [1] and the postulated need and importance of humans to form mental models of task actions, context, and knowledge [30]. Using these frameworks as a guide to understand what users could be experiencing, we aim to document and capitalize on existing knowledge to extend the system to address specific deficits and shortcomings in human performance. The rationale behind conducting this future research is that a detailed knowledge of the possible modes of user interaction must be fully understood in order to allow for the design of the most effective XR infrastructures with our system. A systematic analysis of the incorporation of cognitive architectures into our XR system will allow intelligent machines to proactively assume new types of automation responsibilities, freeing cognitive resources in their human counterparts.


Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the grant agreement no 723711 (MANUWORK) and from the Eusko Jaurlaritza/Gobierno Vasco under the grant no KK-2018/00071 (LANGILEO).

References

  1. Baddeley A (2012) Working memory: theories, models, and controversies. Annu Rev Psychol 63:1–29
  2. Bangor A, Kortum P, Miller J (2009) Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud 4(3):114–123
  3. Bangor A, Kortum PT, Miller JT (2008) An empirical evaluation of the System Usability Scale. Int J Hum–Comput Interact 24(6):574–594
  4. Baumeister J, Ssin SY, ElSayed NA, Dorrian J, Webb DP, Walsh JA, Simon TM, Irlitti A, Smith RT, Kohler M et al (2017) Cognitive cost of using augmented reality displays. IEEE Trans Vis Comput Graph 23(11):2378–2388
  5. World Economic Forum, The Boston Consulting Group (BCG) (2018) Towards a reskilling revolution: a future of jobs for all. World Economic Forum
  6. Bortolini M, Faccio M, Gamberi M, Pilati F (2017) Multi-objective assembly line balancing considering component picking and ergonomic risk. Comput Ind Eng 112:348–367
  7. Bortolini M, Ferrari E, Gamberi M, Pilati F, Faccio M (2017) Assembly system design in the Industry 4.0 era: a general framework. IFAC-PapersOnLine 50(1):5700–5705
  8. Boud AC, Haniff DJ, Baber C, Steiner S (1999) Virtual reality and augmented reality as a training tool for assembly tasks. In: Proceedings of the IEEE international conference on information visualization, p 32
  9. Bray T (2017) The JavaScript Object Notation (JSON) data interchange format. Tech. rep., Google
 10. Brogårdh T (2007) Present and future robot control development—an industrial perspective. Annu Rev Control 31(1):69–79
 11. Browne P (2009) JBoss Drools business rules. Packt Publishing Ltd
 12. Caudell TP, Mizell DW (1992) Augmented reality: an application of heads-up display technology to manual manufacturing processes. In: Proceedings of the twenty-fifth Hawaii international conference on system sciences, vol 2. IEEE, pp 659–669
 13. Curtis D (1998) Several devils in the details: making an AR application work in the airplane factory. In: Proceedings of IWAR’98, pp 47–60
 14. De Amicis R, Ceruti A, Francia D, Frizziero L, Simões B (2018) Augmented reality for virtual user manual. Int J Interact Des Manuf (IJIDeM) 12(2):689–697
 15. ElMaraghy HA (2005) Flexible and reconfigurable manufacturing systems paradigms. Int J Flex Manuf Syst 17(4):261–276. https://doi.org/10.1007/s10696-006-9028-7
 16. Faccio M, Gamberi M, Bortolini M, Pilati F (2018) Macro and micro-logistic aspects in defining the parts-feeding policy in mixed-model assembly systems. Int J Serv Oper Manag 31(4):433–462
 17. Feiner S, Macintyre B, Seligmann D (1993) Knowledge-based augmented reality. Commun ACM 36(7):53–62
 18. Fite-Georgel P (2011) Is there a reality in industrial augmented reality? In: 2011 10th IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 201–210
 19. Funk M, Bächler A, Bächler L, Kosch T, Heidenreich T, Schmidt A (2017) Working with augmented reality? A long-term analysis of in-situ instructions at the assembly workplace. In: Proceedings of the 10th international conference on pervasive technologies related to assistive environments (PETRA ’17). ACM, New York, pp 222–229. https://doi.org/10.1145/3056540.3056548
 20. Gauglitz S, Lee C, Turk M, Höllerer T (2012) Integrating the physical environment into mobile remote collaboration. In: Proceedings of the 14th international conference on human-computer interaction with mobile devices and services. ACM, pp 241–250
 21. Glock CH, Grosse EH, Neumann WP, Sgarbossa F (2017) Editorial: human factors in industrial and logistic system design. Comput Ind Eng 111:463–466
 22. Grajewski D, Górski F, Zawadzki P, Hamrol A (2013) Application of virtual reality techniques in design of ergonomic manufacturing workplaces. Procedia Comput Sci 25:289–301
 23. Grosse EH, Glock CH, Jaber MY, Neumann WP (2015) Incorporating human factors in order picking planning models: framework and research opportunities. Int J Prod Res 53(3):695–717
 24. Grosse EH, Glock CH, Neumann WP (2017) Human factors in order picking: a content analysis of the literature. Int J Prod Res 55(5):1260–1276
 25. Haringer M, Regenbrecht H (2002) A pragmatic approach to augmented reality authoring. In: Proceedings of the international symposium on mixed and augmented reality (ISMAR 2002). IEEE, pp 237–245
 26. Henderson SJ, Feiner S (2009) Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret. In: 8th IEEE international symposium on mixed and augmented reality (ISMAR 2009). IEEE, pp 135–144
 27. Hořejší P (2015) Augmented reality system for virtual training of parts assembly. Procedia Eng 100:699–706
 28. Industrial augmented reality — Wikipedia, the free encyclopedia (2019). en.wikipedia.org/wiki/Industrial_augmented_reality. [Online; accessed 04 Jan 2019]
 29. Johannessen JA (2018) Automation, innovation and economic crisis: surviving the fourth industrial revolution. Routledge
 30. Johnson-Laird PN (1980) Mental models in cognitive science. Cogn Sci 4(1):71–115
 31. Kato H (2019) ARToolKit. en.wikipedia.org/wiki/ARToolKit. [Online; accessed 04 Jan 2019]
 32. Vuforia — augmented reality for the industrial enterprise (2019). https://www.vuforia.com. [Online; accessed 04 Jan 2019]
 33. Knopfle C, Weidenhausen J, Chauvigne L, Stock I (2005) Template based authoring for AR based service scenarios. In: Proceedings of IEEE Virtual Reality 2005. IEEE, pp 237–240
 34. Kollatsch C, Schumann M, Klimant P, Wittstock V, Putz M (2014) Mobile augmented reality based monitoring of assembly lines. Procedia CIRP 23:246–251
 35. Lantegi Batuak (2019) Estudio de impacto social de Lantegi Batuak en Bizkaia. http://www.bizkaia21.eus/biblioteca_virtual/descargar_documento.asp?idDoc=572&idArea=3&idPagina=124. [Online; accessed 3 Jan 2019]
 36. Lindgren R, Johnson-Glenberg M (2013) Emboldened by embodiment: six precepts for research on embodied learning and mixed reality. Educ Res 42(8):445–452
 37. Mann S, Furness T, Yuan Y, Iorio J, Wang Z (2018) All reality: virtual, augmented, mixed (X), mediated (X,Y), and multimediated reality. arXiv:1804.08386
 38. Manyika J, Lund S, Chui M, Bughin J, Woetzel J, Batra P, Ko R, Sanghvi S (2017) Jobs lost, jobs gained: workforce transitions in a time of automation. McKinsey Global Institute
 39. Milgram P, Colquhoun H (1999) A taxonomy of real and virtual world display integration. Mixed Reality: Merging Real and Virtual Worlds 1:1–26
 40. Mourtzis D, Zogopoulos V, Vlachou E (2017) Augmented reality application to support remote maintenance as a service in the robotics industry. Procedia CIRP 63:46–51
 41. Navab N (2004) Developing killer apps for industrial augmented reality. IEEE Comput Graph Appl 24(3):16–20
 42. Negri E, Fumagalli L, Macchi M (2017) A review of the roles of digital twin in CPS-based production systems. Procedia Manuf 11:939–948
 43. Odenthal B, Mayer MP, Kabuß W, Kausch B, Schlick CM (2009) An empirical study of assembly error detection using an augmented vision system. In: Proceedings of the 3rd international conference on virtual and mixed reality (VMR ’09). Springer, Berlin, pp 596–604. https://doi.org/10.1007/978-3-642-02771-0_66
 44. Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern Part A: Syst Humans 30(3):286–297
 45. Pathomaree N, Charoenseang S (2005) Augmented reality for skill transfer in assembly task. In: IEEE international workshop on robot and human interactive communication (ROMAN 2005). IEEE, pp 500–504
 46. Peniche A, Diaz C, Trefftz H, Paramo G (2012) Combining virtual and augmented reality to improve the mechanical assembly training process in manufacturing. In: American conference on applied mathematics, pp 292–297
 47. Petersen N, Pagani A, Stricker D (2013) Real-time modeling and tracking manual workflows from first-person vision. In: 2013 IEEE international symposium on mixed and augmented reality (ISMAR), pp 117–124. https://doi.org/10.1109/ISMAR.2013.6671771
 48. Polson PG, Lewis C, Rieman J, Wharton C (1992) Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. Int J Man-Mach Stud 36(5):741–773
 49. Posada J, Toro C, Barandiaran I, Oyarzun D, Stricker D, De Amicis R, Pinto EB, Eisert P, Döllner J, Vallarino I (2015) Visual computing as a key enabling technology for Industrie 4.0 and industrial internet. IEEE Comput Graph Appl 35(2):26–40
 50. Posada J, Zorrilla M, Dominguez A, Simões B, Eisert P, Stricker D, Rambach J, Döllner J, Guevara M (2018) Graphics and media technologies for operators in Industry 4.0. IEEE Comput Graph Appl 38(5):119–132
 51. Poupyrev I, Tan DS, Billinghurst M, Kato H, Regenbrecht H, Tetsutani N (2001) Tiles: a mixed reality authoring interface. In: INTERACT, vol 1, pp 334–341
 52. Rodriguez L, Quint F, Gorecky D, Romero D, Siller HR (2015) Developing a mixed reality assistance system based on projection mapping technology for manual operations at assembly workstations. Procedia Comput Sci 75:327–333
 53. Rosati G, Faccio M, Carli A, Rossi A (2013) Fully flexible assembly systems (F-FAS): a new concept in flexible automation. Assem Autom 33(1):8–21
 54. Sand O, Büttner S, Paelke V, Röcker C (2016) smARt.Assembly—projection-based augmented reality for supporting assembly workers. In: International conference on virtual, augmented and mixed reality. Springer, pp 643–652
 55. Sauro J, Lewis JR (2011) When designing usability questionnaires, does it hurt to be positive? In: Proceedings of CHI 2011. ACM, pp 2215–2224
 56. Schwald B, De Laval B (2003) An augmented reality system for training and assistance to maintenance in the industrial context
 57. Schwerdtfeger B, Klinker G (2008) Supporting order picking with augmented reality. In: 7th IEEE/ACM international symposium on mixed and augmented reality (ISMAR 2008). IEEE, pp 91–94
 58. Segura Á, Diez HV, Barandiaran I, Arbelaiz A, Álvarez H, Simões B, Posada J, García-Alonso A, Ugarte R (2018) Visual computing technologies to support the Operator 4.0. Comput Ind Eng
 59. Simões B, Álvarez H, Segura A, Barandiaran I (2018) Unlocking augmented interactions in short-lived assembly tasks. In: The 13th international conference on soft computing models in industrial and environmental applications. Springer, pp 270–279
 60. Simões B, De Amicis R, Barandiaran I, Posada J (2018) X-reality system architecture for Industry 4.0 processes. Multimodal Technol Interact 2(4):72
 61. Söderberg R, Wärmefjord K, Carlson JS, Lindkvist L (2017) Toward a digital twin for real-time geometry assurance in individualized production. CIRP Ann 66(1):137–140
 62. Tang A, Owen C, Biocca F, Mou W (2003) Comparative effectiveness of augmented reality in object assembly. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, pp 73–80
 63. Tarallo A, Mozzillo R, Di Gironimo G, De Amicis R (2018) A cyber-physical system for production monitoring of manual manufacturing processes. Int J Interact Des Manuf (IJIDeM), pp 1–7
 64. Westerfield G, Mitrovic A, Billinghurst M (2015) Intelligent augmented reality training for motherboard assembly. Int J Artif Intell Educ 25(1):157–172
 65. Zhang X, Genc Y, Navab N (2001) Mobile computing and industrial augmented reality for real-time data access. In: 2001 8th IEEE international conference on emerging technologies and factory automation (ETFA), vol 2. IEEE, pp 583–588
 66. Zhong XW, Boulanger P, Georganas ND (2002) Collaborative augmented reality: a prototype for industrial training. In: 21st biennial symposium on communication, Canada, pp 387–391

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Bruno Simões (1), Email author
  • Raffaele De Amicis (2)
  • Iñigo Barandiaran (1)
  • Jorge Posada (1)

  1. Vicomtech, Donostia/San Sebastian, Spain
  2. Oregon State University, Corvallis, USA