Abstract
Recent advances in industrial digitization and automation have led to an increasing need for assistance systems that support workers in various fields of activity, such as assembly, logistics and maintenance. Current assistance systems for the maintenance domain are usually based on a single visualization technology. In our view, however, this does not match real maintenance work, as such operations involve various subtasks for which different interaction concepts would be advantageous. Therefore, in this paper, we propose a concept for a multi-device assistive system that combines multiple devices to provide workers with relevant information across the different subtasks of a maintenance operation, and we present our first prototype of such a system.
1 Introduction
Nowadays, we experience a trend towards the development of digital assistive systems to support workers in industrial environments. This trend is driven by the ongoing digitization and automation and by the growing complexity and heterogeneity of manufacturing processes and production plants [1]. In this context, digital assistance systems are intended to reduce the cognitive load of workers in order to make and keep complex manufacturing systems controllable. Current assistance systems for industrial applications cover various fields of activity, such as assembly, maintenance, logistics and training [17]. In the last few years, scientific developments in this context have increasingly relied on augmented reality (AR) devices such as tablet PCs, in-situ projections or head-mounted displays (HMDs), which are able to enrich the user’s field of view with digital information and virtual objects [2, 3]. Additionally, recent advances in the field of depth sensor technologies open up various possibilities for creating context-aware systems and interaction methods [4]. While stationary assistance systems for assembly activities are relatively widespread, the number of systems supporting maintenance activities and repair tasks remains comparatively low. A possible reason is that maintenance and repair tasks require both a mobile approach and a specific adaptation to changing environments and machine types, while assembly systems are usually limited to a specific workplace and thus to a fixed environment.
Therefore, in this paper, we present a concept for a multi-device assistive system to support users in maintenance tasks in industrial environments. In Sect. 2, we review current state-of-the-art assistive systems for maintenance applications in industrial environments. In Sect. 3, we introduce our concept for a multi-device assistive system. In Sect. 4, we present the status of our current prototypical implementation of this system. In Sect. 5, we finally provide a conclusion and an outlook on future research activities in the context of this paper.
2 Related Work
In contrast to assistance systems for assembly tasks, the development of assistance systems for maintenance operations requires considerably higher variability. This is because maintenance tasks are mobile activities which are carried out in different environments and at different production plants. As shown in Fig. 1, maintenance or repair processes can generally be divided into three phases: the information phase (Phase 1), the processing phase (Phase 2) and the documentation phase (Phase 3). While the information phase concerns the collection of information related to a task, the processing phase represents the actual execution of the maintenance activity or repair process at a production system. The documentation phase is finally used to record the results of a maintenance or repair process to finish the overall operation.
Visualization technologies used in current assistive systems, such as tablet PCs, in-situ projections and HMDs have different advantages and disadvantages regarding their application for activities and situations in industrial environments.
In this context, previous studies, such as those presented by Funk et al. [5] and Büttner et al. [18], evaluating different device types for the implementation of assembly assistance systems revealed that in-situ projections can offer better support than HMDs and tablet PCs. However, since these systems usually follow a stationary design based on assembly tables with fixed dimensions, they are hardly applicable to mobile scenarios such as maintenance operations, which are very likely to be performed at different production systems in different locations. In addition, mobile systems based on in-situ projections currently exist only in the form of niche developments such as the projector helmet presented by Funk et al. [6], the TeleAdvisor introduced by Gurevich et al. [7] or the semi-portable MagicMirror system introduced by Fiorentino et al. [8]. Therefore, mobile assistive systems have to be built upon one of the other visualization technologies, such as HMDs and tablet PCs.
Zheng et al. [9] and Aromaa et al. [2] evaluated the efficiency of paper-based instructions and of different devices, such as tablet PCs and HMDs, for assisting workers during maintenance operations. The results of these studies show that HMDs do not have significant advantages over other devices, such as tablet PCs, in terms of completion time and the number of errors when processing a maintenance task. We have to point out, however, that these studies only focus on the performance of the maintenance task itself, and evaluate neither the gathering of information about the machine nor the documentation phase in which the result of the operation is recorded. These phases, however, require a distinct kind of information presentation and interaction design to convey complex information. At the same time, the system must enable a shared viewing and processing of information by several persons in order to allow coordination between the worker and a contact person.
In this sense, devices such as smartphones, tablet PCs or laptops are very likely to provide an adequate way to view detailed information about a maintenance task and represent an ideal tool for documenting the results of such processes. On the other hand, these devices do not allow hands-free work without switching attention between the display and the location of interest. Compared with these systems, AR-based HMDs prove more advantageous for the processing phase because they allow hands-free work. However, they are not efficient for viewing and processing complex data and do not allow information to be shared between multiple users.
In addition, user acceptance of the technologies employed must also be considered, as it has a significant influence on the usability and user experience of the entire system. While touch-based systems are now widely used in everyday life as well as in industrial environments, the proportion of HMDs used in industry is still rather low to non-existent. In the future, this circumstance might change, since current developments in this area are subject to rapid progress.
3 A Concept for Multi-device Assistive Systems
For the implementation of an assistance system based on a combination of different devices and technologies, there are various requirements regarding the communication, the handling and processing of data streams, the handling of different devices and users, and the synchronization between different devices. Figure 2 shows a general concept for the implementation of such a system, including multiple devices, a local server, a logistics software system, a production plant and external sensor systems. In this concept, the server acts as a mediator between the digital infrastructure of a production facility and the different devices. It handles incoming and outgoing data streams, controls device communication, holds relevant information about different tasks and related media content, and allocates devices to specific users.
3.1 Assistive Devices
During the last years, many devices based on numerous technologies have been developed and evaluated to provide users with information over different sensory channels. Today, mobile devices like smartphones, tablet PCs or laptops are the most frequently used devices for mobile applications. However, the latest developments in the area of AR-based HMDs show great potential for future applications in industrial environments. In contrast to previous assistance systems, which are usually limited to a single device type, our approach combines different systems according to their respective advantages and disadvantages in different situations and activities.
As stated in Sect. 2, there are currently no devices that can perform interactive in-situ projections in mobile applications. Therefore, the choice of applicable devices falls primarily on tablet PCs, laptops and HMDs, supported by various wearables such as smartwatches or other tactile wearable devices like work gloves [10, 11], bracelets [12, 13] or shoes [14, 15].
3.2 Device Communication
In order to integrate different types of devices into the overall system, a unified communication has to be implemented which enables the development of programs for different operating systems and device types. The basis for this is, on the one hand, a common data structure that allows the transfer of the different content and media formats and, on the other hand, a communication protocol that is implemented by as many potentially usable devices as possible. A protocol implemented in most devices with a wired or wireless network adapter is the Transmission Control Protocol/Internet Protocol (TCP/IP), which allows two devices to communicate via a stream connection over a specific port. Common object-based data structures for the transmission between the server and the devices are primarily the Extensible Markup Language (XML) and the JavaScript Object Notation (JSON) format, as they are supported by various programming languages and systems [16]. Since the text-based instructions of current assistance systems are usually extended by different media formats, such as pictures or videos, it is also necessary to transfer these files from the server to the different devices. A transmission of these files via a JSON or XML structure would be inefficient and time-consuming because each file would have to be transformed into textual information for the transfer. A potential solution for deploying various media files via a network connection is the implementation of a Representational State Transfer API (REST API). Implemented on the server side, it makes files and further information accessible to other devices via specific network addresses.
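To illustrate the combination of TCP/IP transport and a JSON data structure, the following sketch frames a control message as one newline-terminated JSON line ready to be written to a socket. The field names, the newline framing and the media URL are our own illustrative assumptions, not a fixed specification:

```python
import json

def encode_message(msg_type: str, user_id: str, payload: dict) -> bytes:
    """Serialize a control message as one newline-terminated JSON line,
    ready to be written to a TCP stream."""
    envelope = {"type": msg_type, "userId": user_id, "payload": payload}
    return (json.dumps(envelope) + "\n").encode("utf-8")

def decode_message(raw: bytes) -> dict:
    """Parse one received line back into a message dictionary."""
    return json.loads(raw.decode("utf-8"))

# A step instruction carries its text inline, but only REST links for
# media files, so no binary data has to pass through the JSON channel.
step = {
    "step": 3,
    "instruction": "Clean the light sensor behind the front cover.",
    "media": ["http://server.local/api/media/step3.jpg"],
}
wire = encode_message("STEP_DATA", "worker-07", step)
```

Any device that can open a TCP socket and parse JSON can take part in this exchange, which is exactly the portability argument made above.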
3.3 User Management
Since several workers may use the same equipment, or several maintenance activities may be carried out at the same time, the server system must be able to allocate devices and maintenance processes to a specific user. This is realized by an individual user ID which is transmitted in every communication between the server and the individual devices in order to identify the current user.
3.4 Device Synchronisation
Allowing users to switch between different devices during a maintenance process without further adjustments requires an efficient synchronization mechanism. The synchronization is performed by the server, which knows which devices are used by each user. On each interaction on one of the devices, a message is sent to the server requesting the data to be shown in the next or previous step. The server then sends this new data package to every device with the matching user ID.
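The synchronization logic described above can be sketched as follows. The registry layout and message fields are hypothetical, and real devices would receive the package over their TCP connections rather than via in-process callbacks:

```python
class SyncServer:
    """Minimal sketch of the server-side synchronization: the server tracks
    which devices belong to which user and forwards every step change to
    all devices registered under the same user ID."""

    def __init__(self):
        self.devices_by_user = {}  # user ID -> {device ID: send callback}
        self.current_step = {}     # user ID -> currently shown step

    def register(self, user_id, device_id, send):
        """A device announces itself for a given user."""
        self.devices_by_user.setdefault(user_id, {})[device_id] = send

    def on_interaction(self, user_id, requested_step):
        """One device asked for the next/previous step: update the shared
        state and push the same data package to all of the user's devices."""
        self.current_step[user_id] = requested_step
        package = {"userId": user_id, "step": requested_step}
        for send in self.devices_by_user.get(user_id, {}).values():
            send(package)

# A tablet and an HMD registered under the same user stay in sync:
received = []
server = SyncServer()
server.register("worker-07", "tablet", received.append)
server.register("worker-07", "hmd", received.append)
server.on_interaction("worker-07", 4)
```

Because the step state lives on the server, a device joining later can simply be sent the stored current step, which is what makes the device switch seamless.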
4 Prototypical Implementation
Our prototypical implementation of a multi-device assistive system aims to support users on maintenance operations and repair tasks at a laundry folding machine. This machine is normally used in industrial laundries to fold large amounts of hotel linen and towels. The assistive system thereby provides workers with step-by-step instructions and further information about the machine and the environment. The system is located in the SmartFactoryOWL, a demonstrator facility for industrial research projects in the scope of digitization and automation in Lemgo, Germany [19].
The final assistive system will consist of an ordinary mini-PC acting as a local server, an AR-based HMD (Microsoft HoloLens), a tablet PC, a smartphone and custom wearable devices, which are used to support users during the different phases of a maintenance operation. The server provides a REST API for reading media content and a TCP/IP socket connection for transmitting control information, text-based instructions and the links to the associated media content. Since only the links to the media files on the REST server are transmitted, the usage of this content lies in the hands of the visualization software of the different devices. For the transmission of control information and text-based data, we use a specific JSON structure that provides step-by-step instructions as well as further information about the design and position of virtual objects, relevant machine data and the position of the user (see Fig. 3).
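A data package in the spirit of this JSON structure could look as follows. The concrete keys, the media URL and the coordinate values are illustrative assumptions, since the full schema is only shown in Fig. 3:

```python
import json

# Hypothetical step package: instruction text, media links served by the
# REST API, the pose of a virtual marker, and live machine data.
package = json.loads("""
{
  "userId": "worker-07",
  "taskId": "clean-light-sensors",
  "step": 2,
  "instruction": "Open the front cover of the folding machine.",
  "media": ["http://server.local/api/media/step2.jpg"],
  "virtualObjects": [
    {"type": "arrow", "position": [0.4, 1.1, 0.0], "animated": true}
  ],
  "machineData": {"status": "STOPPED", "warnings": []}
}
""")
```

The tablet software would render the instruction text and fetch the image via the REST link, while the HMD software would additionally place the listed virtual objects, so one package serves all device types.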
The server has access to the internal database of the folding machine and can thus collect detailed information about the machine status and messages such as warnings or errors. In addition, the server is also able to read the states of the light sensors inside the machine in order to check the proper operation of a folding process. These two datasets are then used to choose the related set of instructions for a maintenance or repair process.
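The selection of an instruction set from these two datasets can be sketched like this; the status codes and set names are invented for illustration and do not reflect the real machine interface:

```python
def select_instruction_set(machine_status, light_sensors_ok):
    """Pick an instruction set from the machine status read from the
    internal database and the light-sensor check. (Illustrative values:
    the real folding machine reports richer status information.)"""
    if machine_status == "ERROR":
        return "repair"  # an error message takes precedence
    if not all(light_sensors_ok):
        return "clean-light-sensors"  # a failing sensor suggests cleaning
    return "routine-maintenance"
```

The returned set name would then determine which step-by-step instructions the server sends to the user's devices.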
The software for the tablet PC was developed as a 2D application for Windows-based operating systems. In this way, it works on Windows-based tablet PCs as well as on laptops or stationary PCs. It visualizes step-by-step instructions and different media files and can also display runtime data and sensor information from the folding machine.
The software for the AR-based HMD was developed with the Unity game development platform and presents step-by-step instructions for maintenance activities, such as the cleaning of the light sensors of the folding machine (Fig. 4). The system initially scans a QR code to set the global origin of its coordinate system. In this way, the application can be used at any machine of the same type, as long as the QR code is placed at the same spot. Afterwards, the user can choose an instruction from a menu. The step-by-step process of the application consists of a main window, which is placed centrally above the machine. This window shows textual information about the current step of a tutorial, extended by an image of the related machine part, and allows the user to switch between the different steps. In order to guide the user to the right place, virtual objects such as animated arrows or highlighted planes are placed on the relevant parts of the machine. When the user walks around the machine, the main window automatically adjusts its orientation to face the user's position.
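The effect of the QR-code anchor can be illustrated with a small sketch: all virtual objects are stored relative to the code's position, so scanning the code at another machine of the same type re-places every object correctly. The vectors below are plain 3-D tuples chosen for illustration; the actual Unity/HoloLens anchoring involves full pose tracking, not just translation:

```python
def place_objects(qr_origin, relative_offsets):
    """Translate object positions stored relative to the QR code into
    world coordinates for the machine currently in front of the user."""
    return [tuple(o + d for o, d in zip(qr_origin, offset))
            for offset in relative_offsets]

# The same instruction data, anchored at two different machines:
offsets = [(0.5, 1.2, 0.0), (-0.3, 0.8, 0.1)]  # arrow and highlight plane
machine_a = place_objects((10.0, 0.0, 2.0), offsets)
machine_b = place_objects((-4.0, 0.0, 7.5), offsets)
```

Only the scanned origin differs between the two machines; the instruction content itself never has to change.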
5 Conclusion and Future Work
In this paper, we presented a general concept for an assistive system to support users in the various phases of maintenance operations at industrial production systems. In contrast to previous publications, our concept follows a multi-device approach to take advantage of different device technologies in different situations. Our prototypical implementation of such a system is still under development, but it already shows the potential to be applied to multiple machines.
We consider our concept and the prototypical implementation as a first step in order to provide a basis for a framework to integrate mobile assistance systems in the industrial environment. In the future, the existing system will be continuously improved and extended by various systems and information sources in order to provide additional information. Furthermore, we will evaluate different combinations of devices and technologies as well as different visualization methods with regard to their usability, user experience and productivity by performing extensive user studies and questionnaires.
References
Radziwon, A., Bilberg, A., Bogers, M., Madsen, E.S.: The smart factory: exploring adaptive and flexible manufacturing solutions. Procedia Eng. 69, 1184–1190 (2014)
Aromaa, S., Aaltonen, I., Kaasinen, E., Elo, J., Parkkinen, I.: Use of wearable and augmented reality technologies in industrial maintenance work. In: Proceedings of the 20th International Academic Mindtrek Conference, pp. 235–242. ACM (2016)
Fite-Georgel, P.: Is there a reality in industrial augmented reality? In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 201–210. IEEE (2011)
Izadi, S., et al.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 559–568. ACM (2011)
Funk, M., Kosch, T., Schmidt, A.: Interactive worker assistance: comparing the effects of in-situ projection, head-mounted displays, tablet, and paper instructions. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 934–939. ACM (2016)
Funk, M., Mayer, S., Nistor, M., Schmidt, A.: Mobile in-situ pick-by-vision: order picking support using a projector helmet. In: Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, p. 45. ACM (2016)
Gurevich, P., Lanir, J., Cohen, B., Stone, R.: TeleAdvisor: a versatile augmented reality tool for remote assistance. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 619–622. ACM (2012)
Fiorentino, M., Radkowski, R., Boccaccio, A., Uva, A.E.: Magic mirror interface for augmented reality maintenance: an automotive case study. In: Proceedings of the International Working Conference on Advanced Visual Interface, pp. 160–167. ACM (2016)
Zheng, X.S., Foucault, C., Matos da Silva, P., Dasari, S., Yang, T., Goose, S.: Eye-wearable technology for machine maintenance: effects of display position and hands-free operation. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2125–2134. ACM (2015)
Hsieh, Y.-T., Jylhä, A., Jacucci, G.: Pointing and selecting with tactile glove in 3D environment. In: Jacucci, G., Gamberini, L., Freeman, J., Spagnolli, A. (eds.) Symbiotic 2014. LNCS, vol. 8820, pp. 133–137. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13500-7_12
Moy, G., Wagner, C., Fearing, R.S.: A compliant tactile display for teletaction. In: Proceedings 2000, IEEE International Conference on Robotics and Automation, ICRA 2000, vol. 4, pp. 3409–3415. IEEE (2000)
Matscheko, M., Ferscha, A., Riener, A., Lehner, M.: Tactor placement in wrist worn wearables. In: 2010 International Symposium on Wearable Computers (ISWC), pp. 1–8. IEEE (2010)
Brock, A., Kammoun, S., Macé, M., Jouffrais, C.: Using wrist vibrations to guide hand movement and whole body navigation. i-com 13(3), 19–28 (2014)
Fu, X., Li, D.: Haptic shoes: representing information by vibration. In: Proceedings of the 2005 Asia-Pacific Symposium on Information Visualisation, vol. 45, pp. 47–50. Australian Computer Society, Inc (2005)
Xu, Q., Gan, T., Chia, S.C., Li, L., Lim, J.H., Kyaw, P.K.: Design and evaluation of vibrating footwear for navigation assistance to visually impaired people. In: 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 305–310. IEEE (2016)
Nurseitov, N., Paulson, M., Reynolds, R., Izurieta, C.: Comparison of JSON and XML data interchange formats: a case study. Caine 9, 157–162 (2009)
Büttner, S., et al.: The design space of augmented and virtual reality applications for assistive environments in manufacturing: a visual approach. In: Proceedings of the 10th International Conference on Pervasive Technologies Related to Assistive Environments, pp. 433–440. ACM (2017)
Büttner, S., Funk, M., Sand, O., Röcker, C.: Using head-mounted displays and in-situ projection for assistive systems: a comparison. In: Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments, p. 44. ACM (2016)
Büttner, S., Mucha, H., Robert, S., Hellweg, F., Röcker, C.: HCI in der SmartFactoryOWL–Angewandte Forschung & Entwicklung. Mensch und Computer 2017-Workshopband (2017)
Acknowledgements
This work is funded by the German Federal Ministry of Education and Research (BMBF) for project ADIMA under grant number 13FH019PX5.
© 2018 IFIP International Federation for Information Processing

Heinz, M., Dhiman, H., Röcker, C. (2018). A Multi-device Assistive System for Industrial Maintenance Operations. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds.) Machine Learning and Knowledge Extraction. CD-MAKE 2018. Lecture Notes in Computer Science, vol. 11015. Springer, Cham. https://doi.org/10.1007/978-3-319-99740-7_16