Two-Vehicle Coordination System for Omnidirectional Transportation Based on Image Processing and Deviation Prediction

The omnidirectional mobile platform is valued for its excellent mobility and versatility. With the development of the manufacturing industry, transporting oversized or overweight goods has become a new problem. Compared with manufacturing omnidirectional mobile platforms of different specifications, it is more cost-effective and flexible to coordinate two non-physically connected omnidirectional platforms to transport overweight and oversized cargo. However, the roughness of the actual deployment environment and the mechanical deflection between the two vehicles have a significant impact on the normal operation of such a system. This paper combines Mecanum wheels, image processing algorithms and collaboration algorithms to create a novel and practical split-type omnidirectional mobile platform based on image deviation prediction for transporting oversized or overweight goods. The proposed system collects raw measurements from a distance sensor and an image sensor, transmits them to a central processing unit through a wireless communication module, and calculates and predicts the relative deflection between the two vehicles based on our derived mathematical model. This information is then fed to a Kalman filter and a PID control algorithm to coordinate the two vehicles. The effectiveness and performance of our system have been thoroughly tested, and the system has already been applied in a bullet train production line.


Introduction
The omnidirectional mobile platform is widely used in industrial, military, aerospace and other transportation fields due to its high flexibility and versatility (Rojas and Förster 2005). In the field of manufacturing, transportation platforms are used in all aspects of production. Compared with normal wheeled vehicles, although omnidirectional wheel movements are more complicated, such platforms can travel through very narrow and complicated spaces, and their movements are not easily constrained by the terrain surface or obstacles; they are thus better choices in many industrial applications (Yamada et al. 2017; Hacene and Mendil 2019; Qian et al. 2017).
Tracked transport platforms were once the common choice for heavy loads: they offer good load-carrying capacity but limited flexibility. For example, D. A. Carnegie and others invented an outdoor tracked transport platform with excellent load capacity. There have since been a number of projects aiming to develop omnidirectional mobile platforms. In the military area, the National Aeronautics and Space Administration (NASA) invented OmniBot, which has been used to perform remote inspections, site surveys and operations in hazardous areas (Houshangi and Lippitt 1999). Omnix Technology Systems, Inc., built an omnidirectional vehicle for the US Navy to inspect areas that are unreachable by humans and normal vehicles, as well as to transport heavy goods in military environments. In the field of civil manufacturing, US AirTrax Inc. produced the Universal Walking Forklift, which excels in applications requiring tight maneuvering or transporting long goods through standard-sized doors or narrow aisles (Florentina and Ioan 2011). Schulze developed an automated guided vehicle optimized for transporting small payloads, which has been used in civilian products such as wheelchairs and shopping carts (Schulze et al. 2010). Guo developed a new motorized robotic bed mover with omnidirectional mobility; this device is more ergonomic, more effective and safer than the conventional powered bed (Guo et al. 2018). In the field of education, Uranus was the first mobile robot with Mecanum wheels, designed by Carnegie Mellon University and used to support research on indoor robot navigation (Muir and Neuman 1987). Hence, omnidirectional mobile platforms play an essential role in many applications.
Although existing omnidirectional mobile platforms show excellent mobility, cargo has become larger and heavier with the development of industry and manufacturing, and it is difficult for a single omnidirectional mobile platform to transport such oversized and overweight cargo. Therefore, in many industrial applications, dual omnidirectional mobile platform transportation is a better choice. However, in a traditional two-vehicle system, operators often need to control the two delivery platforms at the same time; manual operation causes low accuracy and poor synchronization. A cable-connected two-vehicle system is limited in transportation distance and flexibility (Morishita et al. 2018). These limitations can be overcome by coordinating two non-physically connected vehicles to create an omnidirectional system based on real-time synchronization and precise control.
The use of vision to estimate distance and angle has been widely researched in recent years. One location estimation system includes a plurality of Kalman filters, a UWB positioning system, a pressure sensor, a temperature sensor and a MEMS chip that provides gyroscope, accelerometer and magnetometer information (Hamilton 2018). Lucas and others solved the problem of estimating the location and size of wildfires through visual cameras installed on the frame of a semi-autonomous reconnaissance and data analysis quadcopter (de Paula et al. 2019). Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation (Sithole and Zlatanova 2016). We therefore use a visual camera to calculate the deviation and solve the two-vehicle following problem. This paper proposes and implements an omnidirectional dual-vehicle coordination system (Fig. 1 shows the dual-vehicle platform, where each vehicle moves independently and is coordinated by the proposed control system). The overall system consists of two omnidirectional platforms supported by Mecanum wheels, a remote controller for coordinating movements of the two vehicles, an automatic correction module and a wireless communication module based on the ZigBee protocol. In a normal industrial environment, ground friction and non-identical mechanical systems can cause speed and direction deflections between the two vehicles (Pasterkamp and Pacejka 1997). Therefore, we propose an automatic correction system based on image deviation prediction, which uses cameras and sensors to measure distance and alignment, and calculates displacement and deflection angles through computer vision algorithms. If deflection occurs, the PID controller rapidly adjusts the two vehicles' positions and movements. This design solves the challenge of transporting oversized and overweight objects freely in a normal industrial environment.
The developed system has been tested and applied in a bullet train production line, shown in Fig. 2 (Zhou et al. 2015). Compared with the previously published conference article, this paper adds a detailed system structure, improves the detection algorithm, introduces a Kalman filter and supplements the test data.
The rest of this paper is organized as follows: Sect. 1 introduces the overall system design. Section 2 describes the implementation of the system and its key components. Section 3 provides experiment results to validate the system. Section 4 concludes this paper.

Overall System Design
Figures 3 and 4 illustrate the overall system architecture and the vehicle platform. The dual-vehicle coordination system includes a remote controller module, a core controller module, a vehicle body module and an automatic correction module. The core controller and remote controller are both implemented with an STM32F4 as the core processing unit and communicate through a wireless link based on the ZigBee protocol. The remote controller controls the movement of the vehicle (e.g., start, stop and moving in any direction) through a keyboard or a joystick. The automatic correction module integrates a reference target image on the front vehicle for deflection estimation, a laser distance meter, and an image acquisition and processing system; it employs a TMS320DM642 as the core processor for processing the target image and sending the data to the controller. The core controller collects data from the remote module and the signal processor and then runs the Kalman filter and PID controller to eliminate deflections of speed and direction by adjusting the rotation speeds of the motors (Mamun et al. 2018; Oo et al. 2019; Wu et al. 2017).

Omnidirectional Mobile Platform Creation
There are three choices (see Fig. 5) available for implementing omnidirectional platforms: swerve, holonomic and Mecanum drive (Tatsuro et al. 2018; Park et al. 2010; Shabalina et al. 2018). Swerve drive is a four-wheel drive in which each wheel has an independently steered driving module; it has a simple structure but is difficult to program and control. Holonomic drive consists of four omni-wheels and is easy to control but has low traction capacity. Mecanum drive consists of three or four Mecanum wheels and has been widely used in omnidirectional platforms, e.g., smart robots and wheelchairs, because it is simple to build, program and control (Williams et al. 2002; Salih et al. 2006). This project also employs Mecanum drive with four Mecanum wheels (Fig. 6) (Lin and Shih 2013) to develop a platform that can move freely and rotate around its central point simultaneously, even in small and narrow spaces. The platform has a rectangular layout, with two wheels on each side of the body; see Fig. 7. The radius of a wheel is R, and L denotes the half length of the vehicle body. We introduce a global coordinate system with its origin at the central point O of the mobile platform (Shi et al. 2007). The speed of the center point and the revolution velocities of the wheels are defined as [v_x, v_y, ω_z]^T and ω_i, i = 1, …, 4, respectively. Each wheel is driven by a separate direct current (DC) servomotor, and the combination of the four-wheel movements provides three degrees of freedom (Liu and Wu 2011). The relationship between the vehicle speed and the wheel speeds is [ω_1, ω_2, ω_3, ω_4]^T = K [v_x, v_y, ω_z]^T, where K is the inverse kinematics Jacobian matrix representing the mapping between the body velocity and the wheel speeds; the vehicle body can thus move in any direction by adjusting the ω_i.
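As an illustration, the inverse kinematics mapping through the Jacobian K can be sketched in Python. This is a minimal sketch assuming the common 45° roller convention; the sign pattern of K depends on the wheel mounting and may need flipping for a real platform, and the wheel radius `R` and half-dimensions `lx`, `ly` are placeholder values, not the paper's.

```python
import numpy as np

def inverse_kinematics(vx, vy, wz, R=0.1, lx=0.5, ly=0.4):
    """Map the body velocity [vx, vy, wz] to the four wheel speeds
    via the inverse kinematics Jacobian K (45-degree Mecanum rollers).
    lx, ly are the half length and half width of the wheel base."""
    K = (1.0 / R) * np.array([
        [1, -1, -(lx + ly)],   # wheel 1
        [1,  1,  (lx + ly)],   # wheel 2
        [1,  1, -(lx + ly)],   # wheel 3
        [1, -1,  (lx + ly)],   # wheel 4
    ])
    return K @ np.array([vx, vy, wz])
```

For pure forward motion all four wheels turn at the same speed, while pure rotation drives diagonal wheel pairs in opposite directions, which matches the qualitative behavior described above.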

Analysis of System Workflow
This section introduces the system's software workflow and real-time scheduling process. The system is developed under Keil MDK 5 and CCS 5.5. The processing flow of the system is shown in Fig. 8. When the system starts, the initial position is calibrated first: position information is acquired through the vision system of the following vehicle, and the alignment is continuously adjusted until the ideal initial state is reached. At this point the initialization of the system is complete, and system control and real-time tracking of the two vehicles begin. After reaching the ideal initial state, the two-vehicle system waits for control instructions to transfer and transport goods. The front and rear vehicles belong to the same system, but at this stage the controller only commands the front vehicle, while the rear vehicle follows autonomously. The front vehicle moves under the control of the controller, and the following vehicle is continuously adjusted by the automatic calibration module, using the same process as the calibration in the initial state: the vision system continuously captures the target picture, the deviation information is calculated, and the motors are driven so that the rear vehicle keeps following. It should be noted that if the system carries a rigid cargo load, the automatic calibration of the following vehicle is easier to achieve because of friction. In this case, because the system moves slowly, the computing speed of the algorithm is much higher than the running speed of the vehicle, enabling real-time automatic adjustment.

Kinematics Modeling Analysis
We refer to He (2018) to model the omnidirectional mobile system and establish the motion analysis coordinate system shown in Fig. 9. The origin of the OXY coordinate system is fixed to the motion center of the platform, and the X and Y axes are parallel to the work plane. At the same time, we define the Mecanum wheel coordinate systems (i = 1, 2, 3, 4), whose X and Y axes are parallel to the ground coordinate system and consistent with the direction of the vehicle body coordinate system. 2X represents the center distance of the platform wheels in the length direction, 2Y represents the center distance of the wheels in the width direction, and the rolling angular velocities of the four Mecanum wheels are recorded as ω_i (i = 1, 2, 3, 4). V_X represents the translation speed along the X axis, V_Y the translation speed along the Y axis, ω_o the angular velocity of the platform's spin, and α the angle between the axis of the small roller and the axis of the wheel. Following He (2018), we obtain the kinematic modeling results in Eq. (2).
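A minimal sketch of the forward kinematics in the spirit of Eq. (2), assuming the common roller angle α = 45°. The sign pattern follows the usual Mecanum convention and, together with the placeholder values of `R`, `X` and `Y`, would need to match the actual platform.

```python
def forward_kinematics(w, R=0.1, X=0.5, Y=0.4):
    """Recover the body velocity [V_X, V_Y, omega_o] from the four
    wheel speeds w = (w1, w2, w3, w4), assuming 45-degree rollers.
    X and Y are the half center distances of the wheels."""
    w1, w2, w3, w4 = w
    VX = R / 4.0 * ( w1 + w2 + w3 + w4)                # translation along X
    VY = R / 4.0 * (-w1 + w2 + w3 - w4)                # translation along Y
    wo = R / (4.0 * (X + Y)) * (-w1 + w2 - w3 + w4)    # platform spin
    return VX, VY, wo
```

Equal wheel speeds yield pure translation along X with zero lateral and rotational motion, as expected from the averaging structure of the equations.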

Principle of Deflection Correction and Coordination
The dual-vehicle system is designed to transport oversized and/or overweight goods by coordinating two omnidirectional vehicles so that they maintain a stable, constant distance D_0 and carry out coordinated rotation, keeping the relative positions of the two vehicles constant. In practice, the two vehicles may deflect because of their relative motions and uneven ground conditions. We denote the transverse offset as Δx and the longitudinal offset as Δy. These deflections are corrected by the PID controller introduced in the next subsection. Figure 10 introduces the imaging system that is essential to deflection correction. A rectangular target image is attached to the rear of the front vehicle and captured by the camera mounted on the rear vehicle. The actual width and height of the target image are 225 mm and 158 mm, denoted W and H, respectively. D_l, D_r and D_t denote the distances from the left, right and top edges of the target to the camera; h_l, h_r and w_t denote the lengths of the left, right and top edges on the CCD imaging plane, measured in mm, respectively. The following pinhole relationships can be established: h_l / H = F / D_l, h_r / H = F / D_r and w_t / W = F / D_t, where F is the focal length. Through the laser distance meter, we obtain the measured distance D_1 between the two vehicles. The image acquisition and processing module obtains the numbers of pixels on the left, right and top sides of the target, M_L, M_R and M_T (the numbers of pixels at the top and bottom should be equal because there is no significant elevation deviation), and the dislocated pixel count M_C from the central point. The imaging system measures the number of pixels on each edge of the target picture and stores them in the system as a reference (Fig. 11). The geometric relationships between the two vehicles are drawn in Fig. 12.
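The pinhole relation for each target edge can be checked numerically. A small sketch, using the pixel pitch implied by the stated 1/4-inch CCD (3.2 mm sensor width over 720 pixels); `edge_len_mm` is the physical length of the observed target edge (W or H).

```python
def edge_distance_mm(pixels, edge_len_mm, focal_mm, pitch_mm=3.2 / 720):
    """Distance from the camera to a target edge via the pinhole
    relation h / F = H / D, where the image-plane length is
    h = pixels * pixel-pitch. Pitch defaults to the 1/4-inch CCD
    figure (3.2 mm over 720 pixels)."""
    h = pixels * pitch_mm            # edge length on the imaging plane (mm)
    return focal_mm * edge_len_mm / h
```

With the 12.8 mm focal length used in the experiments, a 225 mm top edge imaged onto 540 pixels corresponds to a distance of 1.2 m, consistent with the default vehicle spacing.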
To facilitate the mathematical analysis and modeling, the center of the rear vehicle's front panel is taken as the origin of a new coordinate system x′o′y′. The deflection angle is α, the lateral error is Δx, and the longitudinal error is Δy.

Vehicle Body Deflection Measurement
This subsection introduces the algorithms for calculating the three parameters α, Δx and Δy, which are sent to the control module to correct the deflection. (Fig. 11 shows captured reference image examples under three kinds of deviation conditions; Fig. 12 shows the deflections between the two vehicles.) The focal distance formula cannot be applied directly because of the deflection angle between the two vehicles. Let L_x be the equivalent length of the top side of the target picture and D_x the distance between the target and the lens. According to the imaging properties and the law of cosines, the length L_x can be expressed as in (4), in which we can replace W and D_t with L_x and D_x, respectively. Then we get the focal length F as in (5), where α is the shifting angle of the rear vehicle and W is the length of the top edge of the target. Similarly, the focal length F can also be derived from the lengths of the vertical edges, as in (6) and (7). Because the number of pixels is proportional to the length of the image, the relationship in (8) can be obtained from (7). We can combine (6) and (8) to eliminate D_x and find the relationship between the number of pixels and the deflection angle α, as in (9). This paper uses an image sensor with a 1/4-inch semiconductor charge-coupled device (CCD) as an example, which has a physical size of 3.2 mm × 2.4 mm and a pixel resolution of 720 × 576; this yields Eq. (10). Combining the above two equations leads to a closed-form solution (11) for the deflection angle, where the parameters M_L, M_R and M_T are obtained directly from the image processing module. The lateral offset Δx can be obtained as follows. We denote the horizontal offset from the center of the target as Δb (as shown in Fig. 12), and D_1 is the distance between the two vehicles measured by the laser distance meter. Based on the relationship between pixels and distance, we obtain Eq. (12).
With the help of the auxiliary lines in Fig. 12, we can obtain the following, where the parameter L denotes the half length of the vehicle body (as shown in Fig. 7).
The third parameter, Δy, can be calculated as follows:
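The pixel-to-distance relationship behind the horizontal offset Δb can be sketched as a simple back-projection: the dislocated pixel count M_C is converted to a length on the CCD and scaled by D_1/F through similar triangles. This is a deliberately simplified sketch; the full derivation above also involves the deflection angle α and the auxiliary geometry of Fig. 12.

```python
def lateral_offset_mm(m_c, d1_mm, focal_mm, pitch_mm=3.2 / 720):
    """Back-project the dislocated pixel count M_C from the image
    centre into a physical offset at distance D1, via similar
    triangles: offset / D1 = (M_C * pitch) / F. Pitch defaults to
    the 1/4-inch CCD figure (3.2 mm over 720 pixels)."""
    offset_on_ccd = m_c * pitch_mm        # dislocation on the sensor (mm)
    return offset_on_ccd * d1_mm / focal_mm
```

At the default 1.2 m spacing and 12.8 mm focal length, a 90-pixel dislocation corresponds to a few tens of millimetres of lateral offset, the order of magnitude the controller must correct.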

Kalman Filter and PID Controller
PID is an algorithm widely used in industrial control (Freire et al. 2018; Chopade et al. 2016). Given the mathematical model of the relationship between pixels and offset, the exact deflection can be detected and accurately calculated by the controller. If it exceeds the threshold value, the core controller coordinates the motors on the Mecanum vehicle to correct the deflection. The three variables [v_x, v_y, ω_z] are the key control parameters. In order to respond rapidly and accurately, we use a Kalman filter and a PID algorithm (Han et al. 2009; Skoczowski et al. 2005) to adjust the vehicle body according to the obtained deflection values [Δx, Δy, α].
Kalman filters have received much attention in robot localization, tracking, motion control, estimation and prediction (Chen 2012). In this paper, Kalman filter is employed to predict the deflection of the rear vehicle.
We can model the problem in this project using a Kalman filter as follows: X̂_{k−1} is the optimal estimate of the deviation at the previous step, and X̂_{k|k−1} is the prediction for the current step k. P_{k|k−1} and P_{k−1} are the covariances of X_{k|k−1} and X_{k−1}, respectively, and Q is the process noise covariance. During the measurement update, the Kalman gain K_gk is computed by the following equations, where R is the covariance of the observation noise.
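The predict/update cycle can be sketched as a scalar Kalman filter on the deviation. Treating the deviation as a random walk (state and observation matrices equal to 1) is an assumption of this sketch; the covariance values Q = 0.05 and R = 0.1 are the ones used in the paper.

```python
def kalman_step(x_prev, p_prev, z, q=0.05, r=0.1):
    """One predict/update cycle of a scalar Kalman filter on the
    deflection (random-walk model, A = H = 1)."""
    # time update (prediction)
    x_pred = x_prev                      # X_{k|k-1} = A * X_{k-1}
    p_pred = p_prev + q                  # P_{k|k-1} = A P_{k-1} A^T + Q
    # measurement update
    k_g = p_pred / (p_pred + r)          # Kalman gain K_gk
    x_est = x_pred + k_g * (z - x_pred)  # a posteriori estimate X_{k|k}
    p_est = (1.0 - k_g) * p_pred         # updated error covariance P_k
    return x_est, p_est
```

Each call consumes the previous estimate and a new measured deviation z, and returns the filtered deviation that is then passed to the PID controller.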
Starting from initial estimates of the covariance matrices (Q: 0.05; R: 0.1) and using the measured deviation and the predicted value, an a posteriori state estimate (the optimal current state estimate) is generated by incorporating the measurement as in (17). The error covariance estimate is updated by (18). The PID controller has been commonly used in industrial control systems; it aims to decrease the difference between a desired value and a measured value (Wen and Tong 2017). The posterior estimate X_{k|k} is fed to the PID controller, which is updated in a real-time recursive fashion. The PID expression of the continuous position type is given in Eq. (19). In the proposed system, we use an incremental PID for discretization, as shown in Eq. (20). In our modeled discrete system, after a series of simplifications, the control variable is determined by Eq. (21), where u(k) denotes the output motor speed at time k, K_p is the proportional coefficient, K_i the integral coefficient, K_d the differential coefficient, and e(k) the deviation between the front and rear vehicles at time k.
Of particular importance is obtaining accurate values of K_p, K_i and K_d, which are essential to the controller. After numerous experiments, we obtained a set of parameter values (K_p: 16; K_i: 0.3; K_d: 10) that enable the automatic coordination system to correct itself precisely and quickly.
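The incremental (velocity-form) PID can be sketched as follows. The gains default to the tuned values reported above; the standard incremental form du(k) = K_p (e_k − e_{k−1}) + K_i e_k + K_d (e_k − 2 e_{k−1} + e_{k−2}) is assumed for the discretized Eq. (21).

```python
class IncrementalPID:
    """Incremental PID: each step returns the speed increment
    du(k) = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2 e_{k-1} + e_{k-2}),
    to be added to the current motor command."""

    def __init__(self, kp=16.0, ki=0.3, kd=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)

    def step(self, e):
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e  # shift the error history
        return du
```

Because only an increment is output, the controller avoids integral windup in the accumulated command, which suits the recursive real-time update described above.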

Experiment Results and Discussion
This section evaluates the designed dual-vehicle coordination system through practical experiments in a real-world environment. The project uses an image sensor with a 1/4-inch CCD made by Sony, which has a physical size of 3.2 mm × 2.4 mm and a pixel resolution of 720 × 576. The length and height of the target image mounted on the front vehicle are 225 mm and 158 mm, respectively. In the experiments, the camera is fixed at 4× zoom, which corresponds to a focal length of 12.8 mm, and the two vehicles are placed at a default distance of 1.2 m.
The first set of experiments focused on verifying the angle deflection model of (9). The front vehicle carrying the target was rotated anticlockwise to produce angle deflections. The relationship between the deflection angles (ground truth) and the calculated angles is recorded in Table 1. From this experiment, we observe that as the deflection angle increases, the numbers of pixels at the top, bottom and left sides of the obtained image decrease, while the number of pixels at the right side increases. By comparing the calculated value α with the measured deflection angle, we conclude that the theoretical analysis in Sect. 3.4 matches the results of the practical experiments.

The second set of experiments verified the displacement model of (10) and (12). We placed the rear vehicle at different positions and recorded α, Δx and Δy. All measured and calculated values are shown in Table 2. The average offset error between the two vehicles is approximately 3–10 mm towards the central line, which meets the production requirement (SIASUN 2014).

Figure 13 shows a segment of a real-time experiment until the system reaches the steady state. In the figure, the red line denotes the deflection angle α, the black line the offset Δx, and the green line the offset Δy. With the proposed system, the angle deflection was corrected from 15° to close to 0°, and the displacements Δx and Δy were adjusted from 400 mm and 70 mm, respectively, to close to 0 mm. The results confirm the temporal performance of the developed coordination system.

Conclusion
This paper introduces the design and testing of a novel intelligent correction system for creating omnidirectional mobile platforms. The main idea of the system is to use an image processing algorithm to estimate the body offsets and rotation angle between two vehicles, and to apply a Kalman filter and a PID controller to adjust the rear vehicle so as to mitigate the deviation. The system has been tested in real-world transport scenarios, which confirmed that the dual-vehicle coordination system can accurately detect the angle, lateral and longitudinal offsets between the two vehicles and correct them efficiently using the controller. The developed dual-vehicle omnidirectional system has been applied in a bullet train production line to transport train carriages, which are particularly challenging for normal omnidirectional vehicles because of their colossal size and weight and the high requirement on coordinated movement.