
1 Introduction

Wire harness installation is a challenging task, especially in automotive production, where the size and complexity of the wire harness make it one of the heaviest and most expensive components [1]. Therefore, wire harnesses are mostly installed manually, which is a physically demanding task [2]. Compared to wire harness assembly, where several processes have been automated over the last decades, wire harness installation in the car still has a low level of automation [3]. However, automation offers the potential to increase process reliability, reduce the physical load on workers, and decrease costs in the long term. This paper therefore aims at a higher degree of automation for the installation of wire harnesses in the car chassis, as shown in Fig. 1.

Fig. 1. Installing a wire harness in a car chassis. The proposed localization concept is used to determine grasp poses for the robot.

2 Problem Statement

In research, wire harnesses are referred to as semi-deformable linear objects (SDLOs) [4]. They consist of rigid components such as clips, plugs, wire channels, and fixations interconnected by deformable parts [1]. Typical processing steps of wire harness installation include routing the wires, inserting the plugs into their mating parts, mounting clips, and connecting any wire ends to appropriate terminals [5]. Wire harness perception for the installation process can be split into:

(i) Finding the required component for an installation step: The challenge here lies in distinguishing the different components from each other. The components might have similar or identical geometry and may be used multiple times across a wire harness.

(ii) Determining the pose of a component: The challenge here is to determine the pose of the component accurately enough for it to be grasped by a robotic manipulator. The components along the wire harness are very small compared to the workspace in the car chassis. For vision systems, this poses the problem of covering a large field of view at high resolution.

3 Related Works

Relevant works that address wire harness localization can be divided into two categories. Approaches from the first category, such as [7] and [8], aim at localizing the deformable parts of the wire harness. They use a series of segmentation and estimation methods to distinguish individual wires, even in entangled wire harness assemblies. However, these works consider aircraft wire harnesses and mainly address applications in aeronautics, such as wire defect detection. The focus of these approaches therefore lies on visual characteristics such as distances between wires, colors, or bending, and they do not consider the circumstances of automotive wire harness installation. One of the most comprehensive works on the use case of automotive wire harness installation is presented in [9]. The authors address the perception problem by attaching visual markers to the wire harness and its components. This only yields a partial representation of the wire harness at the locations where markers are attached and visible to the camera. Hence, no information about the mechanical coupling between the detected markers is available, which is a limiting factor for certain installation steps.

Approaches from the second category neglect the deformable parts and only aim to localize individual components. In [4], the authors investigate the process of mating different electrical plugs of the wire harness. For localization, they use a global camera to detect the component in the workspace with a deep neural network trained on labeled images of the components. For accurate 6D pose estimation, they use a 2D in-hand camera with template matching of 2D shape representatives obtained from 3D CAD data of the plugs. The approach is similar to the proposed concept in the sense that it uses two camera systems. However, it can only localize individual components, not the shape and configuration of the wire harness. Another approach from this category is presented in [10]. The authors use color heuristics to estimate the positions of plugs from RGB-D data with traditional image processing methods. The focus of their method is position estimation; therefore, they do not obtain the 6D pose of the components. However, for certain automotive wire harness installation tasks, the orientation is crucial, especially when subsequent mating requires a specific orientation.

4 Contribution

Although previous approaches have made great efforts to solve the localization challenge, none of the investigated methods fully fulfills the requirements of automotive wire harness installation. Therefore, this paper proposes a two-step localization concept. The concept first estimates the current shape of the wire harness. This allows us to find and distinguish geometrically identical components and to estimate their mechanical coupling. In a second step, the pose of each component is obtained with high accuracy from a narrow field of view by state-of-the-art template matching using the component’s known geometric model. The presented concept is evaluated in a case study on clip installation of a reference wire harness with a seven-axis lightweight robot in a car chassis.

5 Concept

The proposed concept uses two stereo cameras with different fields of view for the localization, see Fig. 2. The first camera is used to find the shape of the wire harness and is therefore referred to as “Camera for Wire harness Localization” (CWL). Its purpose is to estimate a narrow region of interest for the second camera, which is referred to as “Camera for Component Localization” (CCL). The CCL is mounted on the robot end-effector to allow flexible repositioning. Ultimately, the goal is to obtain an accurate estimate of the position and orientation of individual components, which are illustrated as target points in the figure.

Fig. 2. Schematic of the test bench setup. Depicted is the concept of different fields of view of the CCL and CWL. The specific component of interest for accurate pose estimation is highlighted in purple.

The concept consists of two steps. First, the shape of the wire harness is determined. This allows for rough position estimates of all wire harness components. The estimated positions can then be used to distinguish individual components by comparing the estimated position of a component with its respective position on the wire harness. Note that this also allows estimating the mechanical coupling between the components, which determines the available motions during a subsequent manipulation. In the second step, a specific component, chosen by a higher-level assembly logic, can be localized by moving the CCL over its estimated position. This allows a high-resolution pose estimation, as the camera covers only a narrow field of view around the component.

5.1 Wire Harness Localization

The CWL creates a point cloud of the entire workspace. The point cloud is preprocessed to segment the wire harness from the environment. Here we use 2D image processing, such as binary filters and background subtraction, together with 3D image processing, such as clustering and box filters for outlier removal.
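To make the preprocessing chain concrete, the following sketch shows how such a pipeline could look with the Open3D library; the thresholds, the `workspace_bounds`, and the optional `background` scan are illustrative assumptions rather than the exact parameters of our system.

```python
import numpy as np
import open3d as o3d

def preprocess_workspace_cloud(pcd, workspace_bounds, background=None):
    """Segment the wire harness from a CWL point cloud (illustrative sketch)."""
    # Crop to the known workspace volume (box filter).
    min_bound, max_bound = workspace_bounds
    pcd = pcd.crop(o3d.geometry.AxisAlignedBoundingBox(min_bound, max_bound))

    # Optional background subtraction against a reference scan of the empty chassis.
    if background is not None:
        dists = np.asarray(pcd.compute_point_cloud_distance(background))
        pcd = pcd.select_by_index(np.where(dists > 0.01)[0])  # keep points >1 cm off the background

    # Remove sparse outliers.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Keep the largest cluster, assumed to correspond to the wire harness.
    labels = np.asarray(pcd.cluster_dbscan(eps=0.02, min_points=10))
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    return pcd.select_by_index(np.where(labels == largest)[0])
```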

After preprocessing, the localization is performed by registering a model of the wire harness to the obtained point cloud representation of the wire harness. For this step, we use non-rigid registration methods, such as [11] or [12], to estimate correspondences and match the model to the point cloud. This process is shown in Fig. 3a.
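As a minimal illustration of this step, the sketch below uses the Coherent Point Drift implementation from the `pycpd` package as one possible non-rigid registration method; the smoothness parameters `alpha` and `beta` are assumed example values, not tuned settings from our experiments.

```python
import numpy as np
from pycpd import DeformableRegistration  # Coherent Point Drift (one possible choice)

def register_harness_model(model_points, observed_points):
    """Non-rigidly register the wire harness model (Nx3) to the observed point cloud (Mx3)."""
    reg = DeformableRegistration(X=np.asarray(observed_points),  # target: observation
                                 Y=np.asarray(model_points),     # source: harness model
                                 alpha=2.0, beta=2.0)            # smoothness parameters (assumed)
    deformed_model, _ = reg.register()
    return deformed_model  # model points deformed onto the observation
```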

After registering the model to the point cloud, the rough positions of the components can be obtained, as each component is rigidly attached to the wire harness. Therefore, the local position of a component on the wire harness encodes its position uniquely in the workspace.
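A minimal sketch of this lookup is given below; the mapping from component names to model point indices (`component_indices`) is hypothetical and would be derived from the harness design data.

```python
def estimate_component_positions(deformed_model, component_indices):
    """Rough workspace positions of components from the registered harness model.

    component_indices maps component names to the index of the model point at
    which the component is attached (known from the harness design).
    """
    return {name: deformed_model[idx] for name, idx in component_indices.items()}

# Hypothetical usage with assumed indices along the model polyline:
# clip_positions = estimate_component_positions(
#     deformed_model, {"clip_1": 12, "clip_2": 48, "clip_3": 77, "clip_4": 95})
```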

Fig. 3. Estimation methods for the steps of wire harness and component localization.

5.2 Component Localization

Using the rough positions obtained from the estimated wire harness configuration, the robot can move the CCL, such that its field of view covers the targeted component. The CCL then acquires a high-resolution point cloud of the component. From this point cloud, we can obtain an accurate pose estimation using template matching [13]. Template-matching algorithms are often directly integrated into software libraries of modern stereo vision systems and can be readily used. They yield accurate estimations of the position and orientation given a CAD model of the component. For the system used in this paper, the matching result of a wire harness clip is shown in Fig. 3. From the estimated pose of the component, a suitable grasping pose for the robotic manipulator can be obtained by specifying the gripper geometry and desired grasping points.
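The following sketch illustrates how a grasp pose could be derived from the estimated component pose by composing it with a fixed, component-specific gripper offset; the offset values are hypothetical and depend on the actual clip and gripper geometry.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def grasp_pose_from_component(T_world_component, T_component_grasp):
    """Compose the estimated component pose with a fixed grasp offset (4x4 matrices)."""
    return T_world_component @ T_component_grasp

# Hypothetical offset: grasp 2 cm above the clip origin, gripper rotated 90 deg about z.
T_component_grasp = np.eye(4)
T_component_grasp[:3, :3] = R.from_euler("z", 90, degrees=True).as_matrix()
T_component_grasp[:3, 3] = [0.0, 0.0, 0.02]
```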

6 Evaluation

6.1 Experimental Setup

For the experimental evaluation, a setup according to the concept of Fig. 2 is built. The scenario consists of a robot, two stereo vision camera systems, a car chassis, and an exemplary wire harness. The robot is a KUKA iiwa lightweight robot with seven degrees of freedom, which allows for high flexibility in the constrained workspace. The CCL is a Roboception rc_visard 65 stereo camera mounted on the robot end-effector. The CWL is a Nerian Scarlet stereo vision system, which is fixed in the workspace on top of the car chassis. The camera setup is depicted in Fig. 4. A reference harness is used to evaluate the localization concept. Its design was created in consultation with automotive companies associated as partners within the research campus ARENA2036. The wire harness consists of one central trunk of 1 m in length and three additional branches of 0.44 m, 0.36 m, and 0.15 m, all with a radius of 0.01 m. The wire harness is fixed in the car chassis with custom-made fastening clips, which are used as reference components in the evaluation. They are modified for easier grasping with a robotic gripper, as depicted in Fig. 4b.

Fig. 4. Camera setup for the experimental evaluation of the localization concept.

6.2 Evaluation of Wire Harness Localization

The evaluation of the wire harness localization aims at investigating whether the configuration of the wire harness can be localized by the CWL with sufficient accuracy. Sufficient accuracy is measured with respect to the subsequent localization with the CCL, as the rough clip positions are encoded by the wire harness localization of the CWL.

For this purpose, a reference to measure and compare the performance of the localization is defined. This is achieved by fastening the wire harness with four clips in the car chassis. For each of the clips, a successful grasping pose is determined and used as ground truth reference for the further evaluation.

Assuming no prior knowledge of the ground truth clip positions, wire harness localization is performed. From the registered model, the positions of the four mounting clips are estimated. The estimated positions are then compared to the ground truth positions. Since it cannot be assumed that the initial configuration of the wire harness is accurately known, the model is initialized in different poses. The initial poses are randomly sampled with a positional uncertainty of \(\pm {10}\,\textrm{cm}\) in the mounting plane and a rotational uncertainty of \(\pm {20}^{\circ }\) around the normal of the plane. The localization was performed for nine different configurations.
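For illustration, the random initialization could be sampled as in the following sketch, which reproduces the stated uncertainty bounds; the fixed random seed is an arbitrary choice for reproducibility.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(seed=0)

def sample_initial_pose():
    """Sample a perturbed initial pose of the harness model in the mounting plane."""
    dx, dy = rng.uniform(-0.10, 0.10, size=2)   # +/- 10 cm in the mounting plane
    yaw = rng.uniform(-20.0, 20.0)              # +/- 20 deg about the plane normal
    T = np.eye(4)
    T[:3, :3] = R.from_euler("z", yaw, degrees=True).as_matrix()
    T[:3, 3] = [dx, dy, 0.0]
    return T
```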

Fig. 5. Exemplary tracking results for the evaluation of the wire harness localization with different initialization poses of the model.

Exemplary results are shown in Fig. 5. The model is depicted in blue, the observed point cloud in black, and the estimated configuration from non-rigid registration in red; the positions of the mounting clips are indicated by black boxes for the measured ground truth and blue boxes for the model’s estimates. To quantitatively assess the accuracy of the wire harness localization, the error between the measured ground truth and the estimate is defined as the Euclidean distance and averaged for each clip over the conducted experiments. Figure 6 shows the evaluation scenario, with the enumeration of the clips depicted in Fig. 6a and the measured mean position error for each clip given in Fig. 6b. The position error remains below 5 cm for all experiments. This determines the maximum uncertainty that must be accounted for when defining a region of interest for the clip localization: the CCL should be positioned such that an area of \(\pm {5}\,\textrm{cm}\) around the estimated clip position is covered, to ensure that the sought clip lies within its field of view.

For all randomly sampled initial configurations of the wire harness, the clip error after registration lies below 5 cm, which is sufficiently accurate for targeting the field of view of the subsequent component localization towards the desired clip. The standard deviation over all experiments remains between \(\pm {0.1}\,\textrm{cm}\) and \(\pm {2}\,\textrm{cm}\). This can be used to define a confidence measure for distinguishing different clips. Considering a radius of the standard deviation around every clip’s estimated position, clips that lie outside each other’s radius can be clearly distinguished and uniquely identified from their estimated positions alone. Within the radius, such a distinction cannot be made, and the clips need to be distinguished differently, for example by unique geometry, color, or texture information. However, in our experiment, all clips could be uniquely identified because they were mounted farther apart than the maximum standard deviation of 2 cm.
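The described confidence check can be expressed as a simple pairwise distance test, sketched below; the default `std_radius` of 2 cm corresponds to the largest observed standard deviation and is only an assumed threshold.

```python
import numpy as np

def ambiguous_clip_pairs(estimated_positions, std_radius=0.02):
    """Return clip pairs that cannot be distinguished from their position estimates alone."""
    names = list(estimated_positions)
    ambiguous = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dist = np.linalg.norm(np.asarray(estimated_positions[a]) -
                                  np.asarray(estimated_positions[b]))
            if dist <= std_radius:  # estimate of b falls inside a's uncertainty radius
                ambiguous.append((a, b))
    return ambiguous
```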

Fig. 6. Evaluation of wire harness localization.

6.3 Evaluation of Component Localization

The evaluation of the component localization aims to investigate whether the individual components can be localized with sufficient accuracy for grasping. In the conducted case study, the attached clips are considered as the components and serve as a reference for the evaluation. From the prior wire harness localization, a rough position estimate for each clip is available. Using this information, the robot is positioned such that the desired clip lies within the field of view of the CCL. However, many poses can be considered for positioning the CCL. Therefore, component localization is performed from different poses in the workspace.

The experiment is performed for a single wire harness clip, whose ground truth pose is determined, as in the previous experiment, by performing a successful grasp on the clip. The clip is localized by the CCL from seven different perspectives, each time rotating the robot end-effector by several degrees. For each pose, the positional and angular errors are determined from comparison with the ground truth. The positional error is measured as the Euclidean distance between the estimated position and the ground truth. The angular error is measured as the norm of the Rodrigues rotation vector between the measured orientation and the ground truth. Figure 7 shows the results.
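The two error metrics can be sketched as follows, assuming poses are given as 4x4 homogeneous transformation matrices; this is an illustrative computation, not the exact evaluation code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_errors(T_est, T_gt):
    """Positional and angular error between estimated and ground-truth poses (4x4 matrices)."""
    pos_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])      # Euclidean distance
    R_delta = R.from_matrix(T_est[:3, :3] @ T_gt[:3, :3].T)   # relative rotation
    ang_err = np.degrees(R_delta.magnitude())                  # rotation angle in degrees
    return pos_err, ang_err
```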

From the results, it can be seen that the positional error remains below 1 mm and the angular error below 2 \(^\circ \) for six of the seven poses. The first pose yields larger positional and angular errors of up to 1.82 mm and 15 \(^\circ \), which we rate, from experience, as an outlier. The average positional accuracy is 0.46 mm, and the average rotational accuracy is 0.94 \(^\circ \).

From the experiments, it can be observed that the measurement errors are in the range of 1 mm for the position and 2 \(^\circ \) for the orientation. This yields successful grasps, since errors within this range can generally be tolerated by the gripper design. However, outliers such as the one observed for the first pose can occur and may lead to an unsuccessful grasping attempt. A practical approach to counteract such measurement errors is self-centering clip and gripper geometries that provide positional and rotational tolerances during grasping.

Fig. 7. Evaluation of component localization for wire harness clips.

7 Conclusion

This paper presents a concept for wire harness localization. The process is split into two steps. This allows covering a large workspace while maintaining high precision for the pose estimation of individual components.

Within the experimental evaluation of the presented localization concept, clip positions could be estimated with an average error of 1 cm to 4.3 cm in the car chassis. This was sufficient for positioning the CCL to perform accurate localization with a positional accuracy of 0.46 mm and a rotational accuracy of 0.94 \(^\circ \) in a narrow field of view. Positional errors below 1 mm and rotational errors below 2 \(^\circ \) allowed successful grasps in this case study. From the results, it can be expected that the concept can be generalized to the localization of other components such as plugs or wire channels.

Nonetheless, there remain limitations to the proposed method. First, the wire harness localization requires a complete view of the wire harness and a sufficiently good estimate of the initial configuration to perform successful registration. Second, overlapping of wires needs to be avoided because the non-rigid registration could converge to physically implausible local minima, which causes the localization to fail. Third, the components need to be constrained to specific orientations, as the robot cannot re-grasp and re-position the component arbitrarily.

Future research will be necessary to improve the concept. Methods that are capable of localizing the wire harness from partial views and recovering from false position estimates or failed grasps will make the process more robust and bring us closer to the goal of automated wire harness installation.