
1 Introduction

Human-robot collaboration (HRC) is an important research topic in the current push towards autonomous and adaptive manufacturing due to its flexible application possibilities [16, 24]. A main disadvantage of traditional industrial robotic systems is the necessity of fences and static safety configurations with high space consumption, which makes their integration into adaptive and flexible production systems challenging [16]. The benefit of collaborative robots is not limited to the direct interaction of humans and robots; it also includes flexible safety functions, easy relocation, and lower space consumption [24]. HRC requires safety to be applicable in industrial environments. The related work presents different safety methods for different levels of human-robot interaction. The level of interaction is often differentiated into coexistence, cooperation, and collaboration [3, 16, 24]. In coexistence, humans and robots share the same environment but do not interact directly. In cooperation, humans and robots work in the same workspace but on different tasks, and in collaboration, the human and robot execute a task together. The safety methods are differentiated into collision avoidance and collision detection [20, 23, 27] or similar categories [28, 30]. Collision detection is required for collaboration, where humans and robots interact directly, and for cooperation, where they share the same workspace, because of the possible contact between a workpiece and a human. In these cases, the impact of an injury through a collision must be limited. Collision avoidance is sufficient for coexistence because no direct contact between humans and robots is required or intended. Besides the research, standards exist for handling safety in HRC. The EN ISO 10218 [13] presents three safety methods for collaborative interaction: hand-guided control, speed and separation monitoring (SSM), and power and force limiting.

Collaborative robots like the Bosch APAS, Universal Robots, KUKA LBR iiwa, and Franka Emika are designed for direct interaction between humans and robots. Safety is achieved by limiting the injury caused by a collision through lightweight structures or sensorized skins. However, due to these lightweight structures, the payload is typically limited to less than 20 kg. Thus, these robots are flexible in terms of safety but limited to specific use cases with low payload requirements. According to the number of robot installations in 2020, the market share of collaborative robots is small at five percent [1]. Using traditional industrial robotic systems for human-robot collaboration could increase this share because the limitation to specific use cases would be lifted. The most common applications of industrial robots are handling and welding [1], which do not require direct interaction between humans and robots. Instead, a safe coexistence between humans and robots is sufficient. New safety functions are required to enable the adaptive and flexible application of traditional industrial robotic systems in manufacturing during coexistence with humans.

To the best of the authors' knowledge, there are only a few products on the market which try to make the safety of traditional robotic systems more flexible. The SafetyEye from Pilz uses a camera-based vision system in which static danger zones can be defined. No fences are required, but the zones are fixed at configuration time [18]. The system enables SIL3 solutions. Another product is the INXPECT radar sensor, targeting static danger zones with SIL2 or PLd. These solutions are more flexible during configuration but still require much space. Therefore, this paper presents a flexible safety method based on speed and separation monitoring. In contrast to the products above and the research presented in Sect. 2, this new approach monitors the position and speed of humans and robots to calculate the required separation. The resulting distance is lower compared to worst-case assumptions where no speed or position is monitored. The monitoring of robots and humans enables a more efficient space usage, which is shown in a simulation in Sect. 4. Furthermore, the boundary conditions for the design and implementation of such a system are discussed in Sect. 5.

2 Related Work

The problem of safety in human-robot coexistence has been addressed for many years. In contrast to the low number of certified products on the market, many different approaches can be found in research, based on different sensors and safety functions. Various literature reviews give an overview of these approaches. Eight reviews have been identified [3, 12, 20, 22, 23, 27, 28, 30] by searching for ((human AND robot) OR (human-robot)) AND (safety OR safe) AND (review OR survey) in Web of Science and Scopus. The reviews state different approaches for collision avoidance, such as estimating the intention of a human to modify the robot movement [12, 20]. Moreover, distance determination between humans and robots is used to change the trajectory of the robot, as in [9, 10]. All papers related to SSM in these reviews have been analyzed. Many of these approaches rely on the distance between the robot and the obstacle, like [29, 31]. Others use additional sensors for tracking the human or obstacle, like [5, 7, 11]. Many approaches define static zones around the robot, like [5, 25]. Others take the velocities into account, like [9], which presents collision avoidance by changing the trajectory of the robot with control barrier functions that include the position and velocity of the human.

Four publications are similar to this work's approach. [2] track humans with a Kinect V2 3D-RGB camera and calculate the required safety distance based on ISO/TS 15066. They only monitor the tool center point of the robot instead of a more complex model. By simulation, they show fewer safety stops compared to traditional safety systems. [15] want to enhance the productivity of collaborative tasks by minimizing the degraded state of the robot due to the safety functions. They modify the standard SSM by calculating a safety threshold in real-time, based on the position and velocity of humans and robots. However, their focus lies on cooperation and collaboration. Furthermore, they present a risk classification based on the SSM. [17] track the movements of humans and the robot and transfer them into a physical simulation. The humans are tracked with multiple Kinect cameras. Spheres extend the joint model. The robot's position and future trajectory are tracked as well. The simulation software checks for a collision based on the current human position and the future robot trajectory. Possible movements of humans are not considered. The main contribution of [17] is the camera-based human motion tracking. In [26] the dynamic calculation of the SSM based on the tracking of humans and the robot is analyzed. They present a so-called trajectory-dependent safety distance, which refers to the robot's trajectory. Details on the human modeling are not given. They show a lower required safety distance when the robot moves away from an obstacle. None of the four works focuses on the reduction of required space. Furthermore, [17] and [26] use lightweight robots. Only [15] uses an industrial robot, the ABB IRB1200, but it is still a lightweight robot limited to a low payload.

3 Accurate Speed and Separation Monitoring

The SSM increases safety for HRC by monitoring the separation of humans and robots based on the position and speed of both. This section explains the SSM according to the norm and introduces the modification of this work's approach, where monitored values are used instead of worst-case assumptions. This can lower the required safe space.

3.1 Calculation According to the Norm

The EN ISO 10218 names SSM as a possible safety function to enable HRC but does not detail the implementation of such a safety function. However, the ISO/TS 15066:2016 [14] is a technical specification that guides the implementation of SSM. The required separation distance according to ISO/TS 15066:2016 Sec. 5.5.4.2.3 is given in (1).

$$\begin{aligned} S(t) = S_h(t) + S_r(t) + S_s(t) + C + Z_d + Z_r \end{aligned}$$
(1)

The functions and variables are as follows [14]:

  • \(S_h\) is the contribution to the protective separation distance attributable to the operator’s change in location

  • \(S_r\) is the contribution to the protective separation distance attributable to the robot system’s reaction time

  • \(S_s\) is the contribution to the protective separation distance due to the robot system’s stopping distance

  • C is the intrusion distance, as defined in ISO 13855; this is the distance that a part of the body can intrude into the sensing field before it is detected

  • \(Z_d\) is the position uncertainty of the operator in the collaborative workspace, as measured by the presence sensing device resulting from the sensing system measurement tolerance

  • \(Z_r\) is the position uncertainty of the robot system, resulting from the accuracy of the robot position measurement system.

The separation distance S must be calculated by considering all human and robot parts. The technical specification leaves this open to the user. Therefore a model for humans and robots is required first.

3.2 Human and Robot Modeling

Some approaches use two points, the hand of the human and the tool center point of the robot, like [8], which is not accurate enough as other parts of the human and robot could be closer. Other approaches like [4] are based on geometrical models, but formula (1) cannot be applied to them directly. Others use point clouds generated from the sensor's output, like [7]. Yet others apply a joint-link model, like [9], where humans and robots are modeled by joints and the links between them. As shown in Fig. 1, this work also uses a joint model, extended with spheres around the joints. This model is a good compromise between accuracy and calculation effort.

Fig. 1. Joint model of the human and robot

With the joint model, the separation distance can be calculated as the minimum of the distances between each link of the robot \(j_j\) and each link of the human \(j_i\): \(S = \min_{i,j}(S_{i,j})\).
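A minimal sketch of this distance computation is given below. It assumes each link is approximated as a capsule, i.e. a segment carrying the sphere radius of its joints, with a single radius per body; the segment-segment routine is the standard clamped closest-point algorithm. All names and the Python realization are illustrative, not the authors' implementation.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments [p1, q1] and [p2, q2]
    (clamped closest-point algorithm)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a < 1e-12 and e < 1e-12:              # both segments are points
        return float(np.linalg.norm(p1 - p2))
    if a < 1e-12:                            # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e < 1e-12:                        # second segment is a point
            s, t = np.clip(-c / a, 0.0, 1.0), 0.0
        else:
            b = d1 @ d2
            denom = a * e - b * b            # zero if segments are parallel
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                s, t = np.clip(-c / a, 0.0, 1.0), 0.0
            elif t > 1.0:
                s, t = np.clip((b - c) / a, 0.0, 1.0), 1.0
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def separation(human_links, robot_links, r_human, r_robot):
    """S = min over all human/robot link pairs, shrunk by the sphere radii.
    human_links / robot_links: lists of (start, end) 3D joint positions."""
    return min(segment_distance(p1, q1, p2, q2)
               for p1, q1 in human_links
               for p2, q2 in robot_links) - r_human - r_robot
```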

3.3 Implementation Details and Monitoring Differences

The contribution of the variables to the required separation distance is based on the possibility of monitoring the operator and robot. If no monitoring is possible, a worst-case assumption must be made.

\(S_h\) is expressed in [14] as \(S_h(t) = \int _{t_0}^{t_0+T_r+T_s}v_h(t)\, dt\), where \(v_h(t)\) is the velocity of the human in the direction of the robot. \(T_r\) is the reaction time of the whole system, including the time for detecting the human, processing the signals, and activating the robot stop. A static time is assumed due to the real-time requirement with deterministic behavior for safety functions. The time depends on the used sensors, algorithms, hardware, and robot and must be identified empirically. \(T_s\) is the time for stopping the robot. It depends on the velocity, pose, and payload of the robot and is identified empirically by the robot manufacturer. If the velocity of the human can be monitored, it is only known for the current time \(t_0\). Some approaches [12] try to estimate the intention of the human operator, which could be used to estimate the future velocity \(v_h\). This work uses a more conservative method, assuming a maximum acceleration in addition to the current speed, so that \(S_h(t) = \int _{t_0}^{t_0+T_r+T_s}\left(v_h(t_0) + a_{h,max}\,(t-t_0)\right) dt\), whereby \(v_h\) is limited by \(|v_h(t)| \le v_{h,max}\). If the velocity cannot be monitored, the worst case of \(v_h = v_{h,max} = 1.6 \frac{\textrm{m}}{\textrm{s}}\) must be assumed according to ISO/TS 15066:2016.
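Since the capped acceleration ramp has a closed form, \(S_h\) can be evaluated directly. The following sketch computes the integral above from the monitored speed towards the robot; the acceleration bound a_max is an assumed placeholder, as ISO/TS 15066 only fixes \(v_{h,max}\):

```python
def s_human(v0, t_r, t_s, a_max=2.0, v_max=1.6):
    """Distance the human can cover within T = T_r + T_s, starting from the
    monitored speed v0 towards the robot and accelerating with at most a_max,
    capped at v_max = 1.6 m/s (ISO/TS 15066). a_max = 2 m/s^2 is an assumption."""
    T = t_r + t_s
    v0 = min(max(v0, 0.0), v_max)
    t_cap = (v_max - v0) / a_max             # time until the speed cap is hit
    if t_cap >= T:                           # cap never reached: pure ramp
        return v0 * T + 0.5 * a_max * T**2
    # ramp until v_max, then constant speed for the rest of the interval
    return v0 * t_cap + 0.5 * a_max * t_cap**2 + v_max * (T - t_cap)
```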

The required separation distance attributable to the robot reaction time is \(S_r = \int _{t_0}^{t_0+T_r}v_r(t)\, dt\) according to [14]. If the robot's velocity can be monitored, a potential acceleration must be considered according to the technical specification. In contrast, if the trajectory is known and the position and velocity are monitored, the future position and velocity can be determined based on the dynamics calculation of the robot control. If the velocity cannot be monitored, the maximum \(v_r = v_{r,max}\) must be assumed.
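If the planned trajectory is available, this contribution can be obtained by integrating the planned Cartesian speed over the reaction window instead of assuming \(v_{r,max}\). A sketch, assuming the trajectory is available as a sampled speed profile with increasing time stamps:

```python
import numpy as np

def s_robot_reaction(t_samples, v_samples, t0, t_r, n=50):
    """S_r: integrate the monitored/planned robot speed over [t0, t0 + T_r]
    with the trapezoidal rule. In the unmonitored worst case, this reduces
    to v_r_max * T_r."""
    t = np.linspace(t0, t0 + t_r, n)
    v = np.interp(t, t_samples, v_samples)   # resample the speed profile
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))
```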

The required separation distance attributable to the robot stopping time is \(S_s = \int _{t_0+T_r}^{t_0+T_r+T_s}v_r(t)\, dt\) according to [14]. The stopping distance is measured by the robot manufacturer for each axis and depends on the current velocity, payload, and pose. With these distances, the robot pose for \(t=t_0+T_r+T_s\) can be determined.

The remaining variables C, \(Z_d\), and \(Z_r\) depend on the chosen sensors and are provided by the sensor manufacturer or must be determined empirically.
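Putting the contributions together, Eq. (1) can be evaluated cyclically at runtime. The sketch below reuses s_human and s_robot_reaction from above and interpolates \(S_s\) from a hypothetical manufacturer stopping-distance table; the table entries and the defaults for C, \(Z_d\), and \(Z_r\) are assumptions (the defaults match the values used in Sect. 4):

```python
import numpy as np

# Hypothetical stopping-distance table (speed in m/s -> distance in m),
# standing in for the manufacturer-measured, pose- and payload-dependent data.
STOP_SPEEDS = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
STOP_DISTS = np.array([0.0, 0.05, 0.15, 0.45, 0.90])

def s_stopping(v_r):
    """S_s from the monitored robot speed via linear interpolation."""
    return float(np.interp(v_r, STOP_SPEEDS, STOP_DISTS))

def protective_distance(v_h, v_r, t_r, t_s, c=0.1, z_d=0.1, z_r=0.001):
    """Eq. (1): S(t) = S_h + S_r + S_s + C + Z_d + Z_r. The robot speed is
    simplified to a constant over the reaction window; per the technical
    specification, a potential robot acceleration would have to be added."""
    return (s_human(v_h, t_r, t_s)
            + s_robot_reaction([0.0, t_r], [v_r, v_r], 0.0, t_r)
            + s_stopping(v_r)
            + c + z_d + z_r)
```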

4 Simulation

To validate the potential of this work's model-based SSM approach, a coexistence scenario is simulated, which lays the foundation for a comparison with existing approaches, namely a light fence, the SafetyEye, and the sole monitoring of the robot. The SSM is used for each approach and determines the required separation distance.

4.1 Scenario

Industrial robots are used in various industrial applications. Nevertheless, they are most often used for handling tasks [1]. This is why the benefits of this work's approach are shown by the example of such a robot handling task. This work contributes to the safe coexistence of heavy-weight industrial robots and humans in highly automated plants. Therefore, a scenario is shown where a human walks past an industrial robot, which performs a handling task. A KUKA KR500 R2830 is chosen, which performs a pick-and-place task. A visualization of this simulation scenario is shown in Fig. 2.

Fig. 2. Simulation scenario

The simulation is carried out in a virtual commissioning tool, where the kinematics of the robot and the human are modeled. A virtual CNC is used and integrated into the virtual commissioning tool to generate a realistic trajectory of the robot. The safety calculations of the different approaches are described in the next section.

4.2 Implementation Details

The parameters for the different safety approaches are identified from the manufacturer manuals or estimated. The values C and \(T_r\) differ for each approach.

  • For the simulation of a light fence, a Pilz PSEN_opII4H [19] with a Beckhoff TwinSafe [6] safety PLC and an EtherCAT fieldbus is used. According to DIN EN 13855, the intrusion distance is calculated as \(C=8(d-14)\) with d and C in mm, whereby in our case \(d=30\,\text {mm}\), so that \(C=0.128\,\text {m}\). The reaction time is calculated as \(T_r = t_{\text {sensor}}+t_{\text {io}}+t_{\text {bus}}+t_{\text {logic}}+t_{\text {bus}}+t_{\text {io}} = 0.0393\,\text {s}\) with the values from [6, 19].

  • The values for the SafetyEye simulation are provided in [18] with \(C = 0.208\,\text {m}\) and \(T_r=0.3\,\text {s}\).

  • For the sole monitoring of the robot and the model-based SSM, \(C = 0.1\,\text {m}\) and \(T_r = 0.3\,\text {s}\) are assumed, based on values presented in [21].

The position uncertainties of the human and robot are estimated to \(Z_d = 0.1\,\text {m}\) and \(Z_r=0.001\,\text {m}\). The maximum velocity of a human is set to \(v_h = v_{h,max} = 1.6 \frac{\textrm{m}}{\textrm{s}}\) as suggested by [14]. The maximum velocity of the robot \(v_{r,max}\) could be set to the physical limits, but depending on the application these values are never reached. Therefore, the maximum velocity occurring during the application satisfies the worst-case assumption.
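As a usage example, the constant required separation of the light fence follows from Eq. (1) under the worst-case assumptions with the parameters above; s_stopping refers to the sketch in Sect. 3.3, and the stopping time \(T_s\) and \(v_{r,max}\) below are assumed placeholder values:

```python
# Worst-case evaluation for the light fence: neither human nor robot motion
# is monitored, so both are assumed to move at maximum speed throughout.
V_H_MAX = 1.6                 # m/s, per ISO/TS 15066
V_R_MAX = 2.0                 # m/s, assumed application-specific maximum
T_R_LF, T_S = 0.0393, 0.5     # s; reaction time from [6, 19], T_S assumed
C_LF, Z_D, Z_R = 0.128, 0.1, 0.001   # m

s_lf = (V_H_MAX * (T_R_LF + T_S)     # S_h: human at v_h,max over T_r + T_s
        + V_R_MAX * T_R_LF           # S_r: robot at v_r,max during reaction
        + s_stopping(V_R_MAX)        # S_s: from the assumed stopping table
        + C_LF + Z_D + Z_R)
print(f"required separation (light fence): {s_lf:.3f} m")
```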

4.3 Evaluation

The results of the simulation are shown in Fig. 3. The minimum occurring separation distance \(d_{min} = \min(d_{i,j})\) is the minimum Euclidean distance between each link of the robot and the human. If the required separation distance is lower than \(d_{min}\), the use case can be considered safe. The required separation distance for the light fence \(S_{lf}\) and the SafetyEye \(S_{se}\) is constant because the worst-case assumption is used for the human and the robot motion. Due to the higher values of the constants C and \(T_r\) for the SafetyEye approach, a higher distance between human and robot is required. Therefore, high sensor accuracy and low reaction time are key elements for realizing small separation distances.

Fig. 3. Comparison of different safety methods

In contrast to these constant values, the required separation distances for the sole monitoring of the robot \(S_{r}\) and for the monitoring of robot and human \(S_{rh}\) are variable because the actual motion of the human and robot is monitored. Monitoring the robot motion halves the required separation distance compared to the light fence. The additional reduction through monitoring the human is smaller, but it yields the smallest required separation distance because the velocities of the human and robot are lower than the worst-case assumptions. Thus, saving space and minimizing safety distances requires monitoring both humans and robots.

5 Requirements and Safety Discussion

The main problem with complex safety functions for HRC is the safe perception of the human and the robot [26]. In related work, different sensors are used and combined, but the certification of the concepts is not considered. In other areas, like autonomous driving, sensor fusion is a recommended way to perceive complex environments. Therefore, a careful examination of possible sensors and of the model extraction must be done. The authors propose a redundant but diverse perception with different sensors and algorithms to gain reliable monitoring of human movement. Furthermore, the robot's motion must be determined safely. A common approach is to use one sensor before and one after the drive so that positional differences in the mechanics can be detected. Most robots have only one sensor at the drive. As shown in the simulation, the processing time is also a key influence on the calculation. The SafetyEye requires a higher separation distance than the light fence due to its higher processing time. Therefore, the sensors and algorithms should have low latencies and deterministic timing behavior.

6 Conclusion and Future Prospects

In this work, it was shown how human and robot modeling combined with a dynamic safety calculation based on SSM can help to safely overcome conservative distance estimations. Based on these results, a complete approach combining two appropriate sensors for the perception of humans should be developed. The combination of such human tracking and the presented model-based SSM approach should be integrated into an industrial robotics environment to help reduce space and make production systems more flexible.