Vision Based Trajectory Dynamic Compensation System of Industrial Robot



Introduction

Background
Flexible manufacturing is a production method designed to adapt easily to changes in the types of products being manufactured. Compared to traditional manufacturing methods, it can easily adjust to variations in the type and quantity of goods produced. Consequently, its adoption has intensified in recent years, even though flexible manufacturing systems often entail higher costs and a dependency on skilled operators. Many researchers have attempted to categorize and summarize flexible manufacturing [1][2][3], but no unified standard yet exists. For example, in [4], flexible manufacturing was divided into two categories based on the demand for flexibility: Environmental Uncertainty and Variability of the Products and Processes. Sethi et al. [5] organized flexible manufacturing into eleven categories (e.g., Process Flexibility, Product Flexibility, Routing Flexibility, and others [5]). Among them, the flexibility of the system's components was divided into three categories: Machine Flexibility, Material Handling Flexibility, and Operation Flexibility. In this study, we focus on the system's machine flexibility (i.e., various operations performed without set-up changes).

Prior Art
In recent years, the application of industrial robots in the industrial field has become increasingly widespread [6]. Robots are not only used in traditional industrial sectors such as welding, assembly, and material handling; in recent years they have also been used in emerging fields such as 3D printing [7][8][9] and 3D scanning [10]. This is primarily due to the variety of industrial robots and their high degrees of freedom, which allow them to meet various application scenarios.
Industrial robots are typically programmed using the teach-in method, in which a human manipulates the robot manually with a teaching pendant and the robot memorizes the motion [11]. While the operator's command trajectory may or may not be optimal, when the task performed by the robot is sufficiently specific, the operator can manually adjust its movements to carry out the required task successfully. However, for flexible manufacturing, which must deal with changes in product types, teach-in programming and manual modification of the robot's motion are no longer feasible.
To address the issue of applying industrial robots to flexible manufacturing, many researchers have proposed their own solutions. The main method is to install various sensors on the robot to meet different processing needs. This structure is also known as a macro/micro manipulator [12].
Djelal et al. [13] proposed a system based on vision and force sensors for the situation in which a robot tracks an unknown constraint surface in the presence of uncertainties in the robot's kinematics, dynamics, and camera models; the system's effectiveness was confirmed through simulation. The work in [14] focuses on the application of the vision-force system in robotic machining, providing a more specific application scenario: surface treatment of the workpiece. In that study, the vision sensor provides visual information to the robot operator, while a force sensor protects both the workpiece and the robot tool. Similar proposals based on force sensors for implementing flexible manufacturing were made in [15,16].
In [17][18][19], researchers proposed a dynamic compensation method and applied it to a human-robot system to improve that system's flexibility. The work in [18] provides a macro/micro manipulator based on visual feedback to meet the demand for robot manufacturing in a zero-waiting-time production line. The application of this manipulator was extended to contour following through human-robot collaboration in [19]. Similar research on a vision-feedback-based macro/micro manipulator is also presented in [20].

Research Gap
While prior research has made significant advancements in flexible manufacturing systems, especially concerning industrial robots and macro/micro manipulator structures, several gaps persist.
Although the researchers in [17][18][19] proposed dynamic compensation methods, there is limited work on a system that can seamlessly integrate vision and execution modules for broader applications. Many of these systems are either too complex for practical application or lack the robustness needed to handle unforeseen manufacturing challenges.
Furthermore, the issue of camera occlusion by the end effector, although acknowledged, has not been solved in current research, especially in scenarios where precise trajectory tracking is critical.
Current research lacks a comprehensive solution that can dynamically compensate for such occlusions while maintaining manufacturing accuracy and retaining a simple, easy-to-set-up structure.

Purpose and Contribution
Building on the research background above, the aim of this study is to establish a more efficient and robust system for flexible manufacturing. This study presents a system designed to dynamically adjust the position of a robot's end effector by installing an additional vision module and execution module on the robot's tool flange. To evaluate the system's effectiveness, it was applied to an early application scenario in flexible manufacturing: compensating the contour tracking of an unknown target.
Moreover, using customized fixtures, this system makes real-time compensation of the target trajectory possible even when an end effector is installed.
Furthermore, considering the occlusion of the camera by the end effector, this study proposes a dynamic compensation algorithm based on the camera and the XY-stage. The algorithm sets Regions of Interest (ROIs) on both sides of the end effector, identifies the error between the trajectory contour center and the end effector, and controls the X and Y axes separately, achieving dynamic compensation for the end effector.

System Construction
Based on the early application scenario in flexible manufacturing, this study proposes a dynamic compensation system consisting of an XY-stage and two industrial cameras; the system structure is shown in Fig. 1. The hardware components of the system can be divided into three parts according to their functions. Each dimension of the XY-stage is controlled by the data returned by the industrial camera rigidly mounted on the linear stage in the corresponding direction. The vision module and execution module are connected by a custom fixture (see Fig. 2), which is adjustable so that the camera's posture can be modified to position the robot's end effector at the center of the camera's field of view. Compared to the solution in [19], the cameras in the proposed system do not occupy the space of the end effector, making it possible to install conventional machining tools on the robot; in this respect the proposed system is more flexible than the previous one.

Vision Module
The vision module detects the trajectory contour and relays its state to the execution module. This module consists of two cameras, each oriented in a different direction, as shown in Fig. 2. The camera installed on the linear stage in the vision module is named CAM_n (n = 1, 2): n = 1 for the X-direction and n = 2 for the Y-direction.

Identify Compensation Center
To identify the compensation center, this study proposes a high-speed trajectory contour recognition scheme. The scheme improves image processing speed by setting ROIs (Regions of Interest) in the camera's imaging plane, ensuring that the recognition process remains unaffected by the robot's end effector. P_{n,0} is the imaging position of the end effector on the camera imaging plane, shown in Fig. 3 by a red cross. P_{n,0} is also the coordinate origin (0, 0) of the imaging plane for the coordinate system shown by black arrows (unlike the conventional image coordinate system, whose origin is at the upper-left corner). Two ROIs, ROI_{n,1} and ROI_{n,2}, are set within each camera's imaging area; the ROI_{n,i} are the blue box areas, symmetric about P_{n,0}. To make the ROI_{n,i} coverage omnidirectional, the width W_{n,i}, the height H_{n,i}, and the distance D_{n,i} of ROI_{n,i} from P_{n,0} need to satisfy Eq. (1). If the ROIs do not meet the requirements of Eq. (1), blind spots will result when the vision module detects the trajectory contour.

Fig. 2: Structural details of the execution module and the vision module
Trajectory contours within ROI_{n,i} are identified by inverting the original image and setting appropriate thresholds on the grayscale values. The zeroth-order moment, Mom0_{n,i} ∈ R, and the centroid, P_{n,i} ∈ R², of the trajectory contour in ROI_{n,i} are calculated by Eqs. (2) and (3).
Fig. 3: ROIs setting

Here, (x_n, y_n) refers to the row and column index, and I(x_n, y_n) refers to the intensity at location (x_n, y_n) in CAM_n.
To investigate the effectiveness of the proposed system in a simpler way, the target compensation center P_{n,c} ∈ R is calculated as the midpoint of P_{n,1} and P_{n,2}, as shown in Eq. (4).
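As a sketch of Eqs. (2)-(4), the per-ROI moment, centroid, and midpoint computation can be written with NumPy as below. The ROI geometry, threshold value, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def roi_centroid(gray, x0, y0, w, h, thresh=128):
    """Zeroth-order moment (Eq. (2)) and centroid (Eq. (3)) of the
    trajectory contour inside one ROI, in full-image pixel coordinates."""
    roi = gray[y0:y0 + h, x0:x0 + w].astype(float)
    inv = 255.0 - roi                       # invert: dark trajectory -> bright
    mask = (inv > thresh).astype(float)     # thresholded binary contour image
    m00 = mask.sum()                        # zeroth-order moment
    if m00 == 0:
        return 0.0, None                    # no contour pixels in this ROI
    ys, xs = np.mgrid[0:h, 0:w]
    cx = (xs * mask).sum() / m00            # first moments divided by m00
    cy = (ys * mask).sum() / m00
    return m00, (x0 + cx, y0 + cy)

def compensation_center(gray, p0, d, w, h):
    """Midpoint of the two ROI centroids placed symmetrically about the
    end-effector image point p0 = P_{n,0} (Eq. (4))."""
    x_c, y_c = p0
    _, p1 = roi_centroid(gray, x_c - d - w, y_c - h // 2, w, h)  # ROI_{n,1}
    _, p2 = roi_centroid(gray, x_c + d, y_c - h // 2, w, h)      # ROI_{n,2}
    if p1 is None or p2 is None:
        return None
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
```

For a synthetic white image with one dark horizontal line crossing both ROIs, the returned center lies on the line, midway between the two ROI centroids.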

Determine Trajectory State
For the XY-stage, different trajectory angles correspond to different operating states. If the trajectory runs parallel to the X-axis, no compensation is needed on the linear stage in the X-direction. Conversely, if the trajectory is tilted at 45 degrees relative to the X-axis, both axes of the XY-stage must be involved in the compensation. Notably, with a constantly shifting trajectory, deciding which axes should participate in the compensation is a complex challenge. This study proposes an algorithm (Algorithm 1) for the vision module to determine the contour state, State_n, and output it to the controller of the execution module. The torque of the execution module is influenced by the value of State_n (see Section 2.3).
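The state-determination rule of Algorithm 1 reduces to a threshold test on the zeroth-order moments of the two ROIs of each camera; a minimal sketch follows, where the function name and the 0-based data layout are illustrative assumptions.

```python
def determine_state(mom0, thresh_min, thresh_max):
    """Algorithm 1: a stage joins the compensation (State_n = 1) only if the
    contour area Mom0_{n,i} in BOTH of its ROIs lies strictly between
    ThreshMin and ThreshMax; otherwise State_n = 0.

    mom0 is indexed as mom0[n][i] for n, i in {0, 1} (0-based here)."""
    states = []
    for n in range(2):
        result = [int(thresh_min < mom0[n][i] < thresh_max) for i in range(2)]
        states.append(result[0] & result[1])
    return states
```

For example, with both ROI areas of CAM_1 in range but one ROI of CAM_2 empty, this returns [1, 0], so only the X-direction stage participates in the compensation.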

Execution Module
The execution module comprises an XY-stage and two motors that drive it. The proposed execution module is controlled by the feedback information, P_{n,c} and State_n, from the vision module. As previously described, since the cameras in the vision module are rigidly mounted to the linear stages of the XY-stage, any movement of the XY-stage induces a shift in the cameras' coordinate system, producing the feedback. When State_n = 1 (n = 1, 2), the execution module is controlled by a Proportional-Integral (PI) control method. The PI control law is given in Eq. (5), where K_p and K_i are the coefficients of the proportional and integral terms, respectively, and τ_n ∈ R is the control reference under PI control, used to provide torque for the execution module.
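A minimal discrete-time sketch of the per-axis PI law, assuming the error signal is the image-space compensation center P_{n,c}; the gains and sample time below are placeholders, not the paper's tuned values.

```python
class PIAxis:
    """Per-axis PI controller: tau_n = K_p * P_{n,c} + K_i * integral(P_{n,c})."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, p_nc):
        """One control period: accumulate the error integral and return the
        control reference tau_n that drives the stage torque."""
        self.integral += p_nc * self.dt
        return self.kp * p_nc + self.ki * self.integral
```

In the real system one such controller would run per linear stage, fed by the corresponding camera at the vision module's frame rate.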
The execution module has a small working space and high absolute positioning accuracy, while the industrial robot has low teaching accuracy. To compensate for robot errors in all directions, the execution module should maintain an ideal position that allows movement in any direction; that is, the XY-stage should avoid actuator saturation so that its motion remains unrestricted. Resolving the conflict between execution module accuracy and actuator saturation is therefore a specific problem. Based on the fact that the target position is not unique in the target trajectory tracking task, and under the premise of State_1 ∥ State_2 = 1, a balancing coefficient C_{n,f} is introduced into the system. C_{n,f} adjusts the weights of the two stages in the tracking process to avoid actuator saturation. When the position of a linear stage exceeds the boundary B_w, the weight of that linear stage in the system is reduced. The values of the balancing coefficient C_{n,f} are given in Eq. (6).
Here q_n is the command position value of the X linear stage (n = 1) or the Y linear stage (n = 2), and C is a constant between 0 and 1. The control reference τ′_n after the introduction of C_{n,f} is given in Eq. (7). When State_n = 0, some stage is not involved in the compensation at that moment, and P_{n,c} is manually set to 0 to mask any potential interference from the vision module. To increase the robustness of the system, when State_n = 0 persists for 0.5 seconds, the stage is moved toward the home position using Proportional (P) control. The control reference in this situation, τ⁰_n, is given in Eq. (8).
The coefficient V is a constant greater than 0, q_n represents the current position value of the stage obtained through the controller, and B_0 is the boundary value. When |q_n| < B_0, the stage is considered close to the home position and remains at its current position.
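Since Eqs. (6)-(8) are not reproduced here, the following is an assumed piecewise sketch that matches the textual description: full weight inside the boundary B_w and a reduced constant weight C outside it, plus P-control homing with a hold band B_0.

```python
def balance_coefficient(q_n, b_w, c):
    """Assumed form of Eq. (6): weight 1 while the stage stays inside the
    boundary B_w, reduced to the constant C (0 < C < 1) once it exceeds it."""
    return c if abs(q_n) > b_w else 1.0

def scaled_reference(tau_n, q_n, b_w, c):
    """Assumed form of Eq. (7): tau'_n = C_{n,f} * tau_n."""
    return balance_coefficient(q_n, b_w, c) * tau_n

def homing_reference(q_n, v, b_0):
    """Assumed form of Eq. (8): P control toward the home position when
    State_n = 0 has persisted; hold position once |q_n| < B_0."""
    return 0.0 if abs(q_n) < b_0 else -v * q_n
```

The sign convention (torque opposing the current position q_n) is also an assumption; only the piecewise behavior is taken from the text.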

Summary
As the input to the entire system, the information from the two camera views is transmitted to an image processing PC for further processing. The P_{n,c} obtained through image processing and the State_n computed by Algorithm 1 are output by the image processing PC to the controller of the execution module. The controller uses P_{n,c} and State_n to determine the control reference value and drive the execution module accordingly. As stated in Section 2.1, the vision module and execution module are connected by a custom fixture, so the compensation effect on the robot end effector produced by the execution module's motion is instantaneously fed back into the cameras' fields of view, completing the feedback loop.

Experimental Verification of the System's Feasibility
In this study, a UR3e robot, manufactured by Universal Robots Co., Ltd., served as the industrial robot module, offering an extensive working area. Figure 2 shows the details of the execution module and the vision module. In Fig. 2, a customized XY-stage driven by motors, featuring a 1,000 Hz frequency response and a 20 mm total stroke on each axis, functions as the execution module. The specifications of the execution module motors were estimated with an accelerometer and are shown in Table 1. The XY-stage is made of aluminum to optimize its mass and dynamics. The vision module uses two MQ013MG-ON optical inspection cameras produced by XIMEA Corp., which operate in grayscale mode within the ROI and achieve 500 FPS at a resolution of 640 × 512. The cameras were aimed at the X-direction and Y-direction work planes, respectively. Figure 4 shows the experimental setup.
A customized pen, serving as the manipulation instrument, is fixed to the XY-stage's work plane. By adjusting the fixture, the centers of the cameras' fields of view were made to coincide with the pen tip.
As shown in Fig. 4, an irregular trajectory with a width of 2 mm is printed on paper, which is placed on the robot's work plane. In Fig. 5, the black contour represents the target trajectory, while the blue contour denotes the teaching trajectory. Considering the total stroke of the XY-stage, the maximum deviation between the teaching and target trajectories remains under 10 mm (i.e., the target trajectory is constrained within the annular region enclosed by the dotted lines in Fig. 5).

Experimental Schemes for Verifying the Compensation System Effect
To verify the compensation effect of the compensation system, an experimental scheme with two steps is proposed. To make the robot end effector move along the teaching trajectory, a straightforward teaching method was used. In the robot base coordinate system, the robot was commanded to perform a circular motion in the XY-plane with a radius of 100 mm centered at (-150 mm, -330 mm) while maintaining a height of -30 mm. In the proposed schemes, the robot starts at the position shown in Fig. 4 and follows the teaching trajectory shown in Fig. 5 as the command value. The robot executes a clockwise movement in the XY-plane until it returns to its starting position and stops; this process is referred to as a "cycle" in the following sections. The proposed schemes are as follows:
1. Without compensation, the robot completes one cycle at a speed of 40 mm/s.
2. With compensation, the robot completes one cycle at a speed of 40 mm/s.

Fig. 6: Image processing result
Steps 1 and 2 are intended to indicate the compensation effect of the system.
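For reproducibility, the circular teach-in command described above can be sampled as follows; the sampling period and function name are illustrative assumptions, while the geometry and speed come from the text.

```python
import numpy as np

def teaching_trajectory(speed=40.0, dt=0.001, radius=100.0,
                        center=(-150.0, -330.0), z=-30.0):
    """XYZ waypoints (mm) for one clockwise cycle of the teaching circle:
    radius 100 mm about (-150, -330) mm at height -30 mm, at `speed` mm/s."""
    n = int(2 * np.pi * radius / (speed * dt))   # samples in one cycle
    theta = -np.linspace(0.0, 2 * np.pi, n)      # negative sweep -> clockwise
    x = center[0] + radius * np.cos(theta)
    y = center[1] + radius * np.sin(theta)
    return np.column_stack([x, y, np.full(n, z)])
```

Each waypoint keeps the commanded height, and the in-plane distance from the circle center is 100 mm throughout the cycle.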

System Compensation Effect Evaluation
To identify the compensation center, the size and location of the ROIs were adjusted based on the shape and volume of the customized pen. The parameters defining the ROIs are shown in Table 2 (n = 1, 2; i = 1, 2). The parameters defining the control reference are shown in Table 3.
The image processing results calculated by Algorithm 1 at 14.1 seconds after system startup are shown in Fig. 6. Figure 6a shows the image in the X-direction camera's field of view; the three green circles ("⃝"), from left to right, correspond to the positions of P_{1,1}, P_{1,c}, and P_{1,2} after image processing. Figure 6b shows the image in the Y-direction camera's field of view; similarly, the three green circles from left to right correspond to P_{2,1}, P_{2,c}, and P_{2,2}.
Figure 7 shows continuous images during one cycle of trajectory contour tracking. In the snapshots of Fig. 7, two linear trajectory contour regions (7b, 7d), where only a single camera is involved in the compensation, are selected, as well as four trajectory contour turning regions (7a, 7c, 7e, 7f), where both cameras are involved simultaneously. The tracking errors while the system moves through these four regions are also shown in the cyan-colored areas of Fig. 10. The motion of the execution module over one cycle is depicted in Fig. 8: the red line represents the motion values of the X-direction stage, and the blue line those of the Y-direction stage. Figure 8 also illustrates how the control reference drives the stages under certain conditions.
When Algorithm 1 determines that a stage does not require compensation (State_n = 0), the stage moves toward the home position until the distance between its current position and the home position is less than 0.5 mm. This process is shown within the dashed ellipse in Fig. 8.
As a stage's motion values approach or exceed the boundary value, the torque applied to the stage decreases (see Eq. (7)), reducing the motion speed, as shown in the partial view of Fig. 8.
Based on the image processing results, the tracking errors of the system are shown in Fig. 9, where the green and orange dotted lines represent the uncompensated X-direction center P_{1,c} and Y-direction center P_{2,c}, respectively. In Fig. 9, P_{n,c} suddenly jumps to 0 at certain moments. This occurs because the trajectory contour does not meet the constraint conditions of Algorithm 1; in such cases, the system assigns the value 0 to P_{n,c}, and no compensation can be performed at that moment. The red and blue solid lines represent P_{1,c} and P_{2,c} during compensation. Compared to the uncompensated state, the error is significantly reduced, indicating the effectiveness of the compensation in the proposed system.
To assess the compensation effect of the proposed system more intuitively, Fig. 10 shows the absolute distance between the center of the trajectory contour and P_{n,0} before and after compensation, denoted P_c ∈ R and calculated using Eq. (9). In Fig. 10, the red dotted line represents the variation of P_c before compensation. Before compensation, the maximum error recognizable by the vision module is 121 pixels, and there is a significant discontinuous region, indicating that the trajectory contour does not meet the constraint conditions of Algorithm 1, for example by extending beyond the ROIs. In contrast, the blue line represents the variation of P_c after compensation.

Fig. 7: Snapshots of trajectory contour tracking process

In Fig. 10, there are four regions (cyan rectangular areas) where the value of P_c changes rapidly. These regions correspond to the points where the trajectory contour inflects. Before these moments, the execution module was compensated by only one linear stage; within these regions, both axes of the execution module are involved in the compensation simultaneously. Although the compensation control is more complex in the cyan areas than in the regions where only a single linear stage participates, these regions do not exhibit higher tracking errors.
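Eq. (9) is referenced but not reproduced; since P_{1,c} and P_{2,c} are the per-axis error components reported in Fig. 9, a plausible form for the absolute distance is

```latex
P_c = \sqrt{P_{1,c}^{2} + P_{2,c}^{2}}
```

i.e., the Euclidean norm of the two image-space error components; this is an inferred reconstruction, not the paper's printed equation.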
The absolute tracking errors (µ_{|x|}, µ_{|y|}) and average tracking errors (µ_x, µ_y) in the X- and Y-directions for Fig. 9 are presented in Table 4, along with the maximum and average values of P_c. The maximum value of P_c after compensation is 22.8 pixels (1.34 mm), with an average absolute value of 5.4 pixels (0.32 mm). The values in Table 4 further validate the effectiveness of the proposed system.

Comparative Experiment 1: Effect of C_{n,f} on Stage Travel

Figure 12 shows the travel values of the stage in one cycle for Steps 1 and 2. In Fig. 12, the solid line represents the stage's travel when C_{n,f} is set, and the dotted line represents the stage's travel without C_{n,f}. The two datasets differ significantly in the light yellow area: with C_{n,f} set, the X-direction travel of the stage within that area is kept within 6.8 mm, whereas without C_{n,f} the stage reached its maximum X-direction travel.
Figure 13 shows the tracking errors in the two steps. Comparative experiment 1 elucidates the significance of C_{n,f} in managing the stage's travel. The data suggest that, with an appropriate setting of C_{n,f}, more precise control over the stage's travel can be achieved without increasing the tracking error, as evidenced in Table 5 and further corroborated by Figs. 12 and 13. The implementation of C_{n,f} ensures that the XY-stage does not approach its travel limits; consequently, the execution module remains poised to counteract errors from any direction, bolstering the overall robustness of the system.

Comparative Experiment 2: Effect of Contour Recognition on Compensation
In Section 2.2.1, to address the issue of the end effector obscuring the trajectory contour, a scheme was proposed that recognizes the areas on both sides of the end effector. The compensation results based on this scheme are shown in Fig. 9, where errors remain during the compensation process. To investigate whether the recognition scheme caused these errors, comparative experiment 2 was introduced.
In comparative experiment 2, the robot's end effector (the pen) was removed. By adjusting the parameters to the values shown in Table 6, ROI_{n,1} and ROI_{n,2} were made to coincide. From Fig. 3, it can be inferred that when ROI_{n,1} and ROI_{n,2} overlap, the three points P_{n,1}, P_{n,2}, and P_{n,c} satisfy Eq. (10): the Y-direction values of P_{n,1} and P_{n,2} equal P_{n,c}, and P_{n,c} lies directly on the contour of the target trajectory. This eliminates the potential impact that the separated ROIs might have on the system's tracking error when P_{n,c} does not lie exactly on the contour. The adjusted ROIs are shown in Fig. 14.
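Eq. (10) is likewise not reproduced; from the description, overlapping ROIs force the two centroids and the compensation center to coincide in the compensation direction, i.e. plausibly

```latex
P_{n,1} = P_{n,2} = P_{n,c}
```

so the midpoint of Eq. (4) degenerates to the single contour centroid; this reconstruction is inferred from the surrounding text.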
After adjusting the ROIs, the system performs one cycle under the same scheme as Step 2; the resulting tracking errors are shown in Fig. 15.

Conclusion

Researchers have attached cameras, accelerometers, and other sensors to industrial robots to enable them to respond to uncertainties during the manufacturing process. This study introduced a system designed to dynamically compensate the position of a robot's end effector by adding an extra vision module and execution module to the robot's hand. To evaluate the system, it was used in an early application scenario of flexible manufacturing, specifically compensating the contour tracking of an unknown target. The system achieved real-time compensation of the target trajectory even when the end effector obstructed the trajectory. The maximum error during the entire compensation process was 22.8 pixels (1.34 mm), with an average absolute error of 5.4 pixels (0.32 mm). Compared to the maximum error of 8.45 mm between the target trajectory contour and the teaching trajectory contour, the error was reduced by 84%.
Owing to the mechanical structure of the compensation system used in this study, compensation is limited to a two-dimensional plane. To expand the system's application range, future research will discuss implementation strategies for tracking target contours in three-dimensional space.
Moreover, the compensation range of the system is confined to a square region with a total stroke of 20 mm in the two-dimensional plane, and offsets beyond this region cannot be compensated. On the vision side, the external parameters of the cameras were not calibrated. Although the experimental results indicate that the robot's end-effector offset can still be compensated without considering the angle between the camera imaging plane and the trajectory plane, owing to the small size of the recognition area, enhancing the accuracy of the dynamic compensation will require further camera calibration.

Declarations
Funding: The research leading to these results did not receive any funding from agencies in the public, commercial, or not-for-profit sectors.
Competing interests: The authors declare no competing interests.
Availability of data and material: The data and material that support the findings of this study are available on request.
Code availability: Not applicable.

Fig. 6 subcaptions: (a) Image in the X-direction camera; (b) Image in the Y-direction camera

Fig. 8: Motion of the execution module at an endpoint speed of 40 mm/s

Fig. 10: Comparison of the absolute distance in the image before and after compensation system activation, at an endpoint speed of 40 mm/s

Fig. 11: Setup of the comparative experiments: the target trajectory was rotated by -33 degrees
Fig. 13a represents the error in the image when C_{n,f} is set, and Fig. 13b the error when C_{n,f} is undefined: (a) system tracking error with C_{n,f} = 5 mm; (b) system tracking error with C_{n,f} undefined

Fig. 13: The impact of C_{n,f} on the system tracking error

Fig. 15: System tracking error when the two ROIs overlap

Algorithm 1: Determine trajectory state
Require: Mom0_{n,i}, i, n, ThreshMin, ThreshMax
Ensure: State_1, State_2
function StateDetermine(Mom0_{n,i}, i, n, ThreshMin, ThreshMax)
    for n ← 1 to 2 do
        for i ← 1 to 2 do
            if Mom0_{n,i} > ThreshMin and Mom0_{n,i} < ThreshMax then
                result_{n,i} ← 1
            else
                result_{n,i} ← 0
            end if
        end for
        State_n ← result_{n,1} and result_{n,2}
    end for
    return State_1, State_2
end function

The torque of the execution module is influenced by the value of State_n (see Section 2.3). The algorithm acting in the vision module is as follows:
1. Set the upper and lower thresholds, ThreshMax and ThreshMin, for the trajectory area in ROI_{n,i}.
2. Check whether the area in ROI_{n,i} is within the thresholds; return result_{n,i} = 1 if it is fully within the range, and result_{n,i} = 0 otherwise.
3. When the areas within ROI_{n,1} and ROI_{n,2} both yield result_{n,i} = 1, return State_n = 1, and the stage is involved in the compensation; otherwise return State_n = 0.

Table 1: Specifications of the motors of the execution module

Table 2: Parameters for defining control reference

Table 3: Parameters for defining ROIs size and location

Table 4: Tracking errors before and after compensation system activation

Table 5: Impact of C_{n,f} on tracking errors

Table 6: Parameters for defining control reference in comparative experiment 2

The errors in Table 7 also support the above conclusion.

Table 7: Tracking errors under different ROIs settings