
1 Introduction

Nowadays, we experience a trend towards the development of digital assistive systems that support workers in industrial environments. This trend is driven by the ongoing digitization and automation as well as the growing complexity and heterogeneity of manufacturing processes and production plants [1]. In this context, digital assistance systems are intended to reduce the cognitive load of workers in order to make and keep complex manufacturing systems controllable. Current assistance systems for industrial applications cover various fields of activity, such as assembly, maintenance, logistics and training [17]. However, the number of industrial assistance systems in the fields of maintenance, logistics or training is considerably lower than the number of systems for assembly. In the last few years, scientific developments in this context have increasingly relied on augmented reality (AR) devices such as tablet PCs, in-situ projections or head-mounted displays (HMDs), which are able to enrich the user’s field of view with digital information and virtual objects [2, 3]. Additionally, recent advances in the field of depth sensor technologies open up various possibilities for creating context-aware systems and interaction methods [4]. While stationary assistance systems for assembly activities are relatively widespread, the number of systems supporting maintenance and repair tasks remains comparatively low. A possible reason is that maintenance and repair tasks require both a mobile approach and a specific adaptation to changing environments and machine types, whereas assembly systems are usually limited to a specific workplace and thus to a fixed environment.

Therefore, in this paper, we present a concept for a multi-device assistive system that supports users in maintenance tasks in industrial environments. In Sect. 2, we take a look at current state-of-the-art assistive systems for maintenance applications in industrial environments. In Sect. 3, we introduce our concept for a multi-device assistive system. In Sect. 4, we present the status of our current prototypical implementation of this system. In Sect. 5, we finally provide a conclusion and an outlook on future research activities in the context of this paper.

2 Related Work

In contrast to assistance systems for assembly tasks, the development of assistance systems for maintenance operations requires a considerably higher variability. This is due to the fact that maintenance tasks are mobile activities which are carried out in different environments and at different production plants. As shown in Fig. 1, maintenance or repair processes can generally be divided into three sections: the information phase (Phase 1), the processing phase (Phase 2) and the documentation phase (Phase 3). While the information phase concerns the collection of information related to a given task, the processing phase represents the actual execution of the maintenance activity or repair process at a production system. The documentation phase is finally used to document the results of a maintenance or repair process in order to complete the overall operation.

Fig. 1. Overview of the different phases of a maintenance or repair process and the related activities.

Visualization technologies used in current assistive systems, such as tablet PCs, in-situ projections and HMDs, have different advantages and disadvantages regarding their application to activities and situations in industrial environments.

In this context, previous studies, such as those presented by Funk et al. [5] and Büttner et al. [18], on the evaluation of different types of devices for the implementation of assembly assistance systems revealed that in-situ projections can offer better support than HMDs and tablet PCs. However, since these systems usually follow a stationary design based on assembly tables with fixed dimensions, they are hardly applicable to mobile scenarios such as maintenance operations, which are very likely to be performed at different production systems in different locations. In addition, mobile systems based on in-situ projections currently exist only in the form of niche developments such as the projector helmet presented by Funk et al. [6], the TeleAdvisor introduced by Gurevich et al. [7] or the semi-portable MagicMirror system introduced by Fiorentino et al. [8]. Therefore, mobile assistive systems have to be built upon one of the other visualization technologies, such as HMDs and tablet PCs.

Zheng et al. [9] and Aromaa et al. [2] evaluated the efficiency of paper-based instructions and different devices, such as tablet PCs and HMDs, for assisting workers during maintenance operations. The results of these studies show that HMDs do not have significant advantages over other devices, such as tablet PCs, in terms of completion time and the number of errors when processing a maintenance task. However, we have to point out that these studies only focus on the performance of the maintenance task itself and evaluate neither the gathering of information about the machine nor the documentation phase in which the result of the operation is recorded. These phases, however, require a distinct kind of information presentation and interaction design to convey complex information. At the same time, the system must enable the joint viewing and processing of information by several persons in order to allow coordination between the worker and a contact person.

In this sense, devices such as smartphones, tablet PCs or laptops are very likely to provide an adequate way to view the details of a maintenance task and represent an ideal tool for documenting the results of such processes. On the other hand, these systems do not allow hands-free work without switching attention between the display of the device and the location of interest. Compared with these systems, AR-based HMDs prove to be more advantageous for the processing phase because they allow hands-free work. However, they are not efficient for viewing and processing complex data and do not allow information to be shared between multiple users.

In addition, user acceptance of the technologies employed must also be taken into account, as it has a significant influence on the usability and user experience of the entire system. While touch-based systems are now widely used in everyday life as well as in industrial environments, the proportion of HMDs used in industry is still rather low to non-existent. In the future, however, this circumstance might change, since current developments in this area are subject to rapid progress.

3 A Concept for Multi-device Assistive Systems

For the implementation of an assistance system based on a combination of different devices and technologies, there are various requirements regarding the communication, the handling and processing of data streams, the handling of different devices and users, and the synchronization between different devices. Figure 2 shows a general concept for the implementation of such a system, including multiple devices, a local server, a logistics software system, a production plant and external sensor systems. In this concept, the server acts as a communicator between the digital infrastructure of a production facility and the different devices. It handles incoming and outgoing data streams, controls device communication, holds relevant information about different tasks and related media content, and allocates devices to specific users.

Fig. 2. Overview of the concept of a multi-device assistive system.

3.1 Assistive Devices

During the last years, many devices based on numerous technologies have been developed and evaluated to provide users with information over different sensory channels. Today, mobile devices like smartphones, tablet PCs or laptops are the most frequently used devices for mobile applications. However, the latest developments in the area of AR-based HMDs show great potential for future applications in industrial environments. In contrast to previous assistance systems, which are usually limited to a single device type, our approach combines different systems according to their respective advantages and disadvantages in different situations and activities.

As stated in Sect. 2, there are currently no devices that can provide interactive in-situ projections in mobile applications. Therefore, the choice of applicable devices falls primarily on tablet PCs, laptops and HMDs, supported by various wearables like smartwatches or other tactile wearable devices such as work gloves [10, 11], bracelets [12, 13] or shoes [14, 15].

3.2 Device Communication

In order to integrate different types of devices into the overall system, a unified communication has to be implemented which enables the development of programs for different operating systems and device types. The basis for this is, on the one hand, a common data structure that allows the transfer of the different content and media formats and, on the other hand, a communication protocol that is implemented on as many potentially usable devices as possible. A protocol that is implemented in most devices with a wired or wireless network adapter is the Transmission Control Protocol/Internet Protocol (TCP/IP), which allows two devices to communicate via a serial connection over a specific port. The most common object-based data structures for the transmission between the server and the devices are the Extensible Markup Language (XML) format and the JavaScript Object Notation (JSON) format, as they are supported by various programming languages and systems [16]. Since the text-based instructions of current assistance systems are usually extended by different media formats, such as pictures or videos, it is also necessary to transfer these files from the server system to the different devices. A transmission of these files via a JSON or XML structure would be inefficient and time-consuming because each file would have to be transformed into textual information for the transfer. A potential solution for deploying various media files via a network connection is the implementation of a Representational State Transfer API (REST API). Implemented on the server side, it makes files and further information accessible to other devices via a specific network address.
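As a rough illustration of such an exchange, the following minimal Python sketch shows how a device could request the next instruction step from the server via a JSON message over a plain TCP socket. The server address, port, message fields and function name are assumptions made for this example and are not prescribed by the concept itself.

```python
import json
import socket

# Hypothetical connection parameters; any free port on the local server would do.
SERVER_ADDRESS = ("192.168.0.10", 5050)


def request_next_step(user_id: str, current_step: int) -> dict:
    """Send a JSON request over a TCP socket and return the server's reply.

    The field names (user_id, action, step) are illustrative assumptions;
    the concept only prescribes a JSON or XML structure transmitted via TCP/IP.
    """
    request = {"user_id": user_id, "action": "next_step", "step": current_step}
    with socket.create_connection(SERVER_ADDRESS) as sock:
        # Newline-delimited JSON keeps message framing simple on a stream socket.
        sock.sendall((json.dumps(request) + "\n").encode("utf-8"))
        reply = sock.makefile("r", encoding="utf-8").readline()
    return json.loads(reply)


if __name__ == "__main__":
    step = request_next_step(user_id="worker-42", current_step=3)
    # The reply is expected to contain the instruction text plus links to media
    # files that the device then fetches from the REST API.
    print(step)
```

In this design, only lightweight text passes over the socket, while larger media files are retrieved on demand from the REST API, which matches the separation described above.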

3.3 User Management

Since it is possible that several workers use the same equipment or that several maintenance activities are carried out at the same time, the server system must be able to allocate devices and maintenance processes to a specific user. This is realized by an individual user ID which is transmitted during the communication between the server and the individual devices in order to identify the current user.
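A minimal sketch of such an allocation on the server side could look as follows; the registry structure, device identifiers and function names are illustrative assumptions rather than part of the concept.

```python
from collections import defaultdict

# Hypothetical in-memory registry mapping a user ID to the devices currently in use.
devices_by_user: dict[str, set[str]] = defaultdict(set)


def register_device(user_id: str, device_id: str) -> None:
    """Allocate a device (e.g. 'hololens-1' or 'tablet-3') to the given user."""
    devices_by_user[user_id].add(device_id)


def release_device(user_id: str, device_id: str) -> None:
    """Remove the allocation when the user logs off or hands the device over."""
    devices_by_user[user_id].discard(device_id)
```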

3.4 Device Synchronisation

Allowing users to switch between different devices during a maintenance process without performing further adjustments requires an efficient synchronization mechanism. The synchronization is performed by the server, which holds the information about which devices are used by a single user. At each interaction on one of the devices, a message is sent to the server to request the data to be shown in the next or previous step. The server then sends this new data package to each device with the matching user ID.
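Building on the device allocation sketched in Sect. 3.3, the following sketch illustrates the synchronization step on the server side. The broadcast function, the send_to_device callable and the message fields are assumptions for illustration only.

```python
import json
from typing import Callable


def broadcast_step(devices_by_user: dict[str, set[str]],
                   user_id: str,
                   new_step: dict,
                   send_to_device: Callable[[str, str], None]) -> None:
    """Push the data package for the requested step to every device of one user.

    devices_by_user is the allocation held by the server (see Sect. 3.3);
    send_to_device stands for the actual TCP transmission and is assumed here.
    """
    payload = json.dumps(new_step)
    for device_id in devices_by_user.get(user_id, set()):
        send_to_device(device_id, payload)
```

Because every device of a user receives the same data package, switching from the HMD to the tablet PC (or vice versa) requires no manual re-synchronization.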

4 Prototypical Implementation

Our prototypical implementation of a multi-device assistive system aims to support users in maintenance operations and repair tasks at a laundry folding machine. This machine is normally used in industrial laundries to fold large amounts of hotel linen and towels. The assistive system provides workers with step-by-step instructions and further information about the machine and the environment. The system is located in the SmartFactoryOWL, a demonstrator facility for industrial research projects in the scope of digitization and automation in Lemgo, Germany [19].

The final assistive system will consist of an ordinary mini-PC acting as a local server, an AR-based HMD (Microsoft HoloLens), a tablet PC, a smartphone and custom wearable devices, which are used to support users during the different phases of a maintenance operation. The server provides a REST API for reading media content as well as a TCP/IP socket connection for transmitting control information, text-based instructions and the links to the associated media content. By transmitting only the links to the media files on the REST server, the usage of this content lies in the hands of the visualization software on the different devices. For the transmission of control information and text-based data, we use a specific JSON structure that provides step-by-step instructions as well as further information about the design and position of virtual objects, relevant machine data or the position of the user (see Fig. 3).

Fig. 3. Example of the data structure for a step as part of a step-by-step instruction for augmented reality devices.
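For illustration, a single step in such a JSON structure might look roughly like the Python sketch below; all field names, values and the URL are assumptions, since Fig. 3 only shows an example of the actual structure used in the prototype.

```python
import json

# Hypothetical example of one step of a step-by-step instruction.
step = {
    "step_id": 4,
    "title": "Clean the light sensor",
    "text": "Open the front cover and wipe the light sensor with a dry cloth.",
    # Media files are referenced by links and fetched from the REST API.
    "media": ["http://assist-server/rest/steps/4/sensor.png"],
    # Virtual objects for AR devices, positioned in the machine coordinate system.
    "virtual_objects": [
        {"type": "arrow", "position": [0.45, 1.10, 0.30], "rotation": [0, 90, 0]}
    ],
    "machine_data": {"sensor_state": "blocked"},
}

print(json.dumps(step, indent=2))
```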

The server has access to the internal database of the folding machine and can thus collect detailed information about the machine status and messages such as warnings or errors. In addition, the server is also able to read the states of the light sensors inside the machine in order to check the proper operation of a folding process. These two datasets are then used to choose the related set of instructions to perform a maintenance or repair process.
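As a rough illustration, the selection of an instruction set from these two datasets could be implemented as in the following sketch; the status codes, sensor state values and instruction identifiers are purely hypothetical and depend on the folding machine's internal database.

```python
def select_instruction_set(machine_status: dict, sensor_states: dict) -> str:
    """Pick an instruction set based on machine messages and light sensor readings.

    Both the status fields and the returned identifiers are hypothetical;
    the actual mapping is defined by the machine data available on the server.
    """
    if machine_status.get("error_code") is not None:
        return "repair_instructions"
    if any(state == "blocked" for state in sensor_states.values()):
        return "clean_light_sensors"
    return "routine_inspection"


# Example: a blocked light sensor leads to the cleaning instructions.
instruction = select_instruction_set(
    machine_status={"error_code": None},
    sensor_states={"sensor_1": "ok", "sensor_2": "blocked"},
)
print(instruction)
```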

The software for the tablet PC was developed as a 2D application for Windows-based operating systems. In this way, it works on Windows-based tablet PCs as well as on laptops or stationary PCs. It visualizes step-by-step instructions and different media files and is also able to display runtime data and sensor information from the folding machine.

The software for the AR-based HMD was developed with the Unity game development platform and presents step-by-step instructions for maintenance activities such as the cleaning of the light sensors of the folding machine (Fig. 4). The system initially scans a QR code to set the global origin of its coordinate system. In this way, the application can be used at any machine of the same type as long as the QR code is placed at the same spot. Afterwards, the user can choose an instruction from a menu. The step-by-step view of the application consists of a main window, which is placed centrally above the machine. This window shows textual information about the current step of a tutorial, extended by an image of the related part of the machine, and allows the user to switch between the different steps. In order to guide the user to the right place, virtual objects such as animated arrows or highlighted planes are placed on the relevant parts of the machine. When the user walks around the machine, the main window automatically adjusts its orientation to face the position of the user.
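The automatic re-orientation of the main window essentially reduces to a yaw rotation towards the user's position in the QR-code-anchored coordinate system. The following sketch shows only this underlying geometry, written in Python for illustration and independent of the actual Unity implementation; the coordinate conventions are assumptions.

```python
import math


def window_yaw_towards_user(window_pos: tuple[float, float, float],
                            user_pos: tuple[float, float, float]) -> float:
    """Return the yaw angle (in degrees) that lets the main window face the user.

    Positions are assumed to be given in the machine coordinate system whose
    origin is set by the scanned QR code; only the rotation around the vertical
    axis is applied so that the window always stays upright.
    """
    dx = user_pos[0] - window_pos[0]
    dz = user_pos[2] - window_pos[2]
    return math.degrees(math.atan2(dx, dz))
```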

Fig. 4. Example views from the AR-based HMD software.

5 Conclusion and Future Work

In this paper, we presented a general concept for an assistive system that supports users in the various phases of maintenance operations at industrial production systems. In contrast to previous publications, our concept follows a multi-device approach in order to take advantage of different device technologies in different situations. Our prototypical implementation of such a system is still under development, but it already shows the potential to be a solution that can also be applied to multiple machines.

We consider our concept and the prototypical implementation as a first step towards providing a basis for a framework to integrate mobile assistance systems into the industrial environment. In the future, the existing system will be continuously improved and extended by various systems and information sources in order to provide additional information. Furthermore, we will evaluate different combinations of devices and technologies as well as different visualization methods with regard to their usability, user experience and productivity by conducting extensive user studies and questionnaires.