1 Introduction

The omnidirectional mobile platform is widely used in industrial, military, aerospace and other transportation fields because of its high flexibility and versatility (Rojas and Förster 2005). In manufacturing, transportation platforms are used in all stages of producing products. Compared with normal wheeled vehicles, omnidirectional platforms have more complicated wheel movements, but they can travel through very narrow and complicated spaces, and their movements are not easily constrained by the terrain surface or obstacles; they are thus better choices in many industrial applications (Yamada et al. 2017; Hacene and Mendil 2019; Qian et al. 2017).

Tracked transport platforms were long the common choice for heavy loads: they offer good load-carrying capacity but limited flexibility. For example, D. A. Carnegie and others developed an outdoor tracked transport platform with excellent load capacity. A number of projects have instead aimed to develop omnidirectional mobile platforms. In the military area, the National Aeronautics and Space Administration (NASA) built OmniBot, which has been used to perform remote inspections, site surveys and operations in hazardous areas (Houshangi and Lippitt 1999). Omnix Technology Systems, Inc., built an omnidirectional vehicle for the US Navy to inspect areas that are unreachable by humans and normal vehicles, as well as to transport heavy goods in military environments. In civil manufacturing, US AirTrax Inc. produced the Universal Walking Forklift, which excels in applications requiring tight maneuvering or transporting long goods through standard-sized doors or narrow aisles (Florentina and Ioan 2011). Schulze developed an automated guided vehicle optimized for transporting small payloads, which has been used in civilian products such as wheelchairs and shopping carts (Schulze et al. 2010). Guo developed a new motorized robotic bed mover with omnidirectional mobility; this device is more ergonomic, more effective and safer than the conventional powered bed (Guo et al. 2018). In education, Uranus, designed by Carnegie Mellon University, was the first mobile robot with Mecanum wheels and was used to support research on indoor robot navigation (Muir and Neuman 1987). Hence, omnidirectional mobile platforms play an essential role in many applications.

Although existing omnidirectional mobile platforms show excellent mobility, cargo has become larger and heavier with the development of industry and manufacturing, and a single omnidirectional mobile platform struggles to transport such oversize and overweight cargo. Therefore, in many industrial applications, transportation with two omnidirectional mobile platforms is a better choice. However, in a traditional two-vehicle system, operators often need to control the two delivery platforms at the same time, and manual operation causes low accuracy and poor synchronization. A cable-connected two-vehicle system is limited in transportation distance and flexibility (Morishita et al. 2018). This limitation can be overcome by coordinating two non-physically connected vehicles into an omnidirectional system based on real-time synchronization and precise control.

The use of vision to estimate distance and angle has been widely researched in recent years. Hamilton (2018) describes a location estimation system that includes a set of Kalman filters, a UWB positioning system, a pressure sensor, a temperature sensor and a MEMS chip providing gyroscope, accelerometer and magnetometer information. Lucas and others estimated the location and size of wildfires using visual cameras mounted on the frame of a semi-autonomous reconnaissance and data analysis quadcopter (de Paula et al. 2019). Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation (Sithole and Zlatanova 2016). Therefore, we use a visual camera to calculate the deviation between the two vehicles and solve the two-vehicle following problem.

This paper proposes and implements an omnidirectional dual-vehicle coordination system (Fig. 1). The overall system consists of two omnidirectional platforms supported by Mecanum wheels, a remote controller for coordinating the movements of the two vehicles, an automatic correction module and a wireless communication module based on the ZigBee protocol. In a normal industrial environment, ground friction and non-identical mechanical systems can cause speed and direction deflections between the two vehicles (Pasterkamp and Pacejka 1997). Therefore, we propose an automatic correction system based on image-deviation prediction and correction, which uses cameras and sensors to measure distance and alignment and computes displacements and deflection angles through computer vision algorithms. If deflection occurs, the PID controller rapidly adjusts the two vehicles' positions and movements. This design solves the challenge of transporting oversized and overweight objects freely in a normal industrial environment. The developed system has been tested and applied in a bullet train production line, as shown in Fig. 2.

Fig. 1
figure 1

The dual-vehicle platform, where each vehicle moves independently and is coordinated by the proposed control system

Fig. 2
figure 2

Implementation of the dual-vehicle system (a 25-ton bullet train carriage is being transported)

This article is an extension of The Design of Intelligence Correction System in Double Coordinate Robot Carry Platform, published at the 2015 4th International Conference on Computer Science and Network Technology (ICCSNT 2015) (Zhou et al. 2015). Compared with the published conference article, this paper adds a detailed system structure, improves the detection algorithm, introduces a Kalman filter and supplements the test data.

The rest of this paper is organized as follows: Sect. 2 introduces the overall system design. Section 3 describes the implementation of the system and its key components. Section 4 provides experimental results to validate the system. Section 5 concludes this paper.

2 Dual-Vehicle Coordination System Design

2.1 Overall System Design

Figures 3 and 4 illustrate the overall system architecture and the vehicle platform. The dual-vehicle coordination system includes a remote controller module, a core controller module, a vehicle body module and an automatic correction module. The core controller and remote controller both use an STM32F4 as the core processing unit and communicate over a wireless link using the ZigBee protocol. The remote controller commands the movement of the vehicle, e.g., start, stop and moving in any direction, through a keyboard or a joystick. The automatic correction module integrates a reference target image on the front vehicle for deflection estimation, a laser distance meter, and an image acquisition and processing system; it employs a TMS320DM642 as the core processor for processing the target image and sending the data to the controller. The core controller collects data from the remote module and the signal processor and then runs the Kalman filter and PID controller to eliminate the deflections of speed and direction by adjusting the rotation speeds of the motors (Mamun et al. 2018; Oo et al. 2019; Wu et al. 2017).

Fig. 3
figure 3

Block diagram of the dual-vehicle coordination system

Fig. 4
figure 4

The rear vehicle platform of the dual-vehicle system

2.2 Omnidirectional Mobile Platform Creation

There are three choices (see Fig. 5) available for implementing omnidirectional platforms: swerve, holonomic and Mecanum drive (Tatsuro et al. 2018; Park et al. 2010; Shabalina et al. 2018). Swerve drive is a four-wheel drive in which each wheel has an independently steered driving module; it has a simple structure but is difficult to program and control. Holonomic drive consists of four omni-wheels and is easy to control but has low traction capacity. Mecanum drive consists of three or four Mecanum wheels and has been widely used in omnidirectional platforms, e.g., smart robots and wheelchairs, because it is simple to build, program and control (Williams et al. 2002; Salih et al. 2006).

Fig. 5
figure 5

Omnidirectional driving wheel types

Fig. 6
figure 6

The structure of Mecanum wheel

Fig. 7
figure 7

The omnidirectional mobile platform sketch

This project also employs Mecanum drive with four Mecanum wheels (Fig. 6) (Lin and Shih 2013) to develop a platform that can move freely and rotate about its central point simultaneously, even in small and narrow spaces. The platform has a rectangular layout, with two wheels on each side of the body; see Fig. 7. The radius of a wheel is R, and L denotes the half length of the vehicle body. We introduce a global coordinate system with its origin at the central point O of the mobile platform (Shi et al. 2007). The speed of the center point and the angular velocities of the wheels are denoted \([v_x, v_y, \omega _z]^T\) and \({\omega }_i,\; i=1,\ldots ,4\), respectively. Each wheel is driven by a separate direct-current (DC) servomotor, and the combination of the four wheels' movements provides three degrees of freedom (Liu and Wu 2011). The relationship between the vehicle speed and the wheel speeds is given in (1), so the vehicle body can move in any direction by adjusting \([v_x, v_y, \omega _z]^T\).

$$\begin{aligned} \begin{bmatrix} {\omega }_1\\ {\omega }_2\\ {\omega }_3\\ {\omega }_4 \end{bmatrix}=K\begin{bmatrix} v_x\\ v_y\\ \omega _z \end{bmatrix}=\frac{1}{R}\begin{bmatrix} 1 &{} 1 &{} -(l_1+l_2)\\ -1 &{} 1 &{} (l_1+l_2)\\ 1 &{} 1 &{} (l_1+l_2)\\ -1 &{} 1 &{} -(l_1+l_2) \end{bmatrix} \begin{bmatrix} v_x\\ v_y\\ \omega _z \end{bmatrix} \end{aligned}$$
(1)

where K is the inverse kinematics Jacobian matrix, which maps the platform velocity \([v_x, v_y, \omega _z]^T\) to the angular speeds of the four wheels.
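For illustration, the following Python sketch evaluates (1); the wheel radius and half lengths are hypothetical placeholder values, not the dimensions of the actual platform.

```python
import numpy as np

# Hypothetical geometry (m): wheel radius and half length/width of the body.
R, l1, l2 = 0.1, 0.4, 0.3
c = l1 + l2

# Inverse kinematics Jacobian K of Eq. (1), consistent with Eqs. (2)-(3).
K = (1.0 / R) * np.array([
    [ 1.0, 1.0, -c],
    [-1.0, 1.0,  c],
    [ 1.0, 1.0,  c],
    [-1.0, 1.0, -c],
])

def wheel_speeds(vx, vy, wz):
    """Wheel angular velocities (rad/s) for a commanded platform velocity."""
    return K @ np.array([vx, vy, wz])

# Translation along y at 0.2 m/s: all wheels turn at vy/R in this convention.
print(wheel_speeds(0.0, 0.2, 0.0))
```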

3 Dual-Vehicle Coordination System Implementation

3.1 Analysis of System Workflow

This section introduces the system's software workflow and real-time scheduling. The system is developed under Keil MDK 5 and CCS 5.5. The processing flow of the system is shown in Fig. 8. When the system starts, the initial position is calibrated first: position information is acquired through the vision system of the following vehicle, and the alignment is continuously adjusted until the ideal initial state is reached. At this point, the initialization of the system is complete, and system control and real-time tracking of the two vehicles begin.

Fig. 8
figure 8

Software workflow of system operation

After reaching the ideal initial state, the two-vehicle system waits for control instructions to transfer and transport goods. The front and rear vehicles belong to the same system, but at this stage the controller only commands the front vehicle, and the rear vehicle follows autonomously. As the front vehicle moves under the controller's commands, the following vehicle is continuously adjusted by the automatic calibration module, using the same procedure as the initial calibration: the vision system continuously captures the target image, the deviation information is calculated, and the motors are driven to correct it. It should be noted that if the system carries a rigid cargo load, the automatic calibration of the following vehicle is easier to achieve because friction couples the two vehicles. In this case, since the system moves slowly, the computing speed of the algorithm is much higher than the running speed of the vehicles, enabling real-time automatic adjustment. A simplified sketch of this follow loop is given below.
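The loop below is a minimal sketch of the rear vehicle's behavior. The helper functions are hypothetical stand-ins for the vision, measurement and drive modules described in the following subsections; the real system runs on the STM32F4/TMS320DM642 pair.

```python
import time

# Hypothetical stand-ins for the real vision/measurement/drive modules.
def capture_target_image(): return None
def read_laser_distance(): return 1200.0                    # mm
def estimate_deflection(frame, d1): return 0.0, 0.0, d1 - 1200.0
def pid_correct(alpha, dx, dy): return 0.0, 0.0, 0.0
def drive_motors(vx, vy, wz): pass

def follow_loop(cycles=10, period_s=0.05, limits=(1.0, 5.0, 5.0)):
    """Rear-vehicle follow loop: measure deviation, correct when out of bounds."""
    for _ in range(cycles):
        frame = capture_target_image()                      # camera on rear vehicle
        d1 = read_laser_distance()                          # laser distance meter
        alpha, dx, dy = estimate_deflection(frame, d1)      # Sect. 3.4
        if any(abs(v) > lim for v, lim in zip((alpha, dx, dy), limits)):
            drive_motors(*pid_correct(alpha, dx, dy))       # Sect. 3.5 and Eq. (1)
        time.sleep(period_s)

follow_loop()
```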

3.2 Kinematics Modeling Analysis

We model the omnidirectional mobile system following He (2018) and establish the idealized coordinate system shown in Fig. 9.

Fig. 9
figure 9

Coordinate system of the platform

We establish the motion analysis coordinate system shown in Fig. 9. The origin of the OXY coordinate system is fixed to the motion center of the platform, and the X and Y axes are parallel to the work plane. We also define the Mecanum wheel coordinate systems \(\sum {O_i}\; (i=1,2,3,4)\), whose X and Y axes are parallel to those of the ground coordinate system and consistent with the directions of the vehicle body coordinate system.

2X denotes the center distance between the wheels along the length of the platform, 2Y denotes the center distance between the wheels along its width, and the rolling angular velocities of the four Mecanum wheels are denoted \(\omega _i\; (i=1,2,3,4)\). \(V_X\) denotes the translation speed along the X axis, \(V_Y\) the translation speed along the Y axis, \(\omega _o\) the angular velocity of the platform spinning, and \(\alpha \) the angle between the axis of the small roller and the axis of the wheel. Following He (2018), we obtain the kinematic model in Eqs. (2) and (3).

$$\begin{aligned} \begin{bmatrix} V_x\\ V_y\\ \omega _o \end{bmatrix} = \frac{R}{4(x+y)} \, M_1 \begin{bmatrix} \omega _1\\ \omega _2\\ \omega _3\\ \omega _4 \end{bmatrix} \end{aligned}$$
(2)
$$\begin{aligned} M_1 = \begin{bmatrix} (x+y) &{} -(x+y) &{} (x+y) &{} -(x+y)\\ (x+y) &{} (x+y) &{} (x+y) &{} (x+y)\\ -1 &{} 1 &{} 1 &{} -1 \end{bmatrix} \end{aligned}$$
(3)
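As a quick consistency check, the sketch below evaluates the forward kinematics (2)-(3) and verifies that it inverts the wheel-speed mapping of Eq. (1); the geometry values are the same hypothetical placeholders as before.

```python
import numpy as np

R, x, y = 0.1, 0.4, 0.3   # hypothetical wheel radius and half distances (m)
c = x + y

# Forward kinematics, Eqs. (2)-(3): wheel speeds -> platform velocity.
M1 = np.array([
    [ c, -c,  c, -c],
    [ c,  c,  c,  c],
    [-1,  1,  1, -1],
])

def platform_velocity(w):
    return (R / (4.0 * c)) * M1 @ np.asarray(w)

# Round trip through the inverse kinematics of Eq. (1).
K = (1.0 / R) * np.array([[1, 1, -c], [-1, 1, c], [1, 1, c], [-1, 1, -c]])
v = np.array([0.1, 0.2, 0.05])       # [Vx, Vy, omega_o]
print(platform_velocity(K @ v))      # recovers [0.1, 0.2, 0.05]
```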

3.3 Principle of Deflection Correction and Coordination

The dual-vehicle system is designed to transport oversized and/or overweight goods by coordinating two omnidirectional vehicles so that they maintain a stable, constant distance \(D_0\) and rotate in a coordinated fashion that keeps their relative positions constant. In practice, the two vehicles may deflect from each other because of their relative motions and uneven ground conditions. We denote the transverse offset as \(\varDelta _x\) and the longitudinal offset as \(\varDelta _y\). These deflections are corrected by the PID controller introduced in Sect. 3.5.

Fig. 10
figure 10

The imaging system

Figure 10 introduces the imaging system that is essential to deflection correction. The rectangular target image is attached to the rear of the front vehicle and captured by the camera mounted on the rear vehicle. The actual width and height of the target image are 225 mm and 158 mm, denoted W and H, respectively. \(D_l\), \(D_r\) and \(D_t\) denote the distances from the left, right and top edges of the target to the camera; \(h_l\), \(h_r\) and \(w_t\) denote the lengths of the left, right and top edges of the image on the CCD plane, measured in mm, respectively. The following relationships can be established:

$$\begin{aligned} \left\{ \begin{array}{l} h_l=H \times F/D_l \\ h_r=H \times F/D_r \\ w_t=W \times F/D_t \end{array} \right. \end{aligned}$$
(4)

where F is the focal length. Through the laser distance meter, we obtain the measured distance \(D_1\) between the two vehicles. The image acquisition and processing module obtains the numbers of pixels on the left, right and top sides of the target, \(M_L\), \(M_R\) and \(M_T\) (the pixel counts at the top and bottom are the same because there is no significant elevation deviation), and the number of pixels \(M_C\) by which the target center is dislocated from the image center. The imaging system measures the number of pixels on each edge of the target picture and stores them in the system as references (Fig. 11). The geometric relationship between the two vehicles is drawn in Fig. 12. To facilitate the mathematical analysis and modeling, the center of the rear vehicle's front panel is taken as the origin of a new coordinate system \(x'oy'\). The deflection angle is \(\alpha \), the lateral error is \(\varDelta x'\), and the longitudinal error is \(\varDelta y'\).
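To illustrate the pixel-to-length conversion behind (4), the sketch below uses the 1/4-inch, \(720\times 576\) CCD described in Sect. 4 (3.2 mm sensor width, 12.8 mm focal length); the pixel count is a made-up example.

```python
PIXEL_MM = 3.2 / 720          # pixel pitch of the assumed 1/4" CCD (mm/px)
W_MM, H_MM = 225.0, 158.0     # real target width and height (mm)

def imaged_length_mm(pixels):
    """Length of a target edge on the CCD imaging plane."""
    return pixels * PIXEL_MM

def distance_from_top_edge(m_t, f_mm=12.8):
    """Invert the third relation of Eq. (4): D_t = W * F / w_t."""
    return W_MM * f_mm / imaged_length_mm(m_t)

# A 540-pixel top edge corresponds to the default 1.2 m vehicle spacing.
print(distance_from_top_edge(540))   # 1200.0 (mm)
```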

Fig. 11
figure 11

The captured reference image example under three kinds of deviation conditions

Fig. 12
figure 12

Deflections between the two vehicles

3.4 Vehicle Body Deflection Measurement

This subsection introduces the algorithms for calculating the three parameters \(\alpha \), \(\varDelta _x\) and \(\varDelta _y\), which are sent to the control module to correct the deflection. The focal distance formula cannot be applied directly because of the deflection angle between the two vehicles. Let \(L_x\) denote the equivalent length of the top edge of the target picture and \(D_x\) the distance between the target and the lens. According to the imaging properties and the law of cosines, \(L_x\) can be expressed as:

$$\begin{aligned} L_x=\frac{4D_x^2W\cos \alpha }{4D_x^2-W^2\sin ^2\alpha } \end{aligned}$$
(5)

In (4), we can replace W and \(D_t\) with \(L_x\) and \(D_x\), respectively. Then, we get the focal length F as follows,

$$\begin{aligned} F=\frac{w_tD_x}{L_x}=\frac{w_t\left( 4D_x^2-W^2\sin ^2\alpha \right) }{4D_xW\cos \alpha } \end{aligned}$$
(6)

where \(\alpha \) is the shifting angle of the rear vehicle. Similarly, the focal length F can also be derived from the lengths of the vertical edges, whose real length is H:

$$\begin{aligned} \left\{ \begin{array}{l} F=\frac{h_l\left( D_x-\frac{W\sin \alpha }{2}\right) }{H} \\ F=\frac{h_r\left( D_x+\frac{W\sin \alpha }{2}\right) }{H} \end{array} \right. \end{aligned}$$
(7)

Because the number of pixels is proportional to the length of the image, the following relationship can be obtained based on (7):

$$\begin{aligned} \frac{M_L}{M_R}=\frac{h_l}{h_r}=\frac{D_x+\frac{W\sin \alpha }{2}}{D_x-\frac{W\sin \alpha }{2}}=1+\frac{W\sin \alpha }{D_x-\frac{W\sin \alpha }{2}} \end{aligned}$$
(8)

We can combine (6) and (8) to eliminate \(D_x\) and find the relationship between the number of pixels and the deflection angle \(\alpha \) as follows:

$$\begin{aligned} \tan \alpha =\frac{F\left( \left( \frac{M_L}{M_R}\right) ^2-1\right) }{w_t\left( \left( \frac{M_L}{M_R}+1\right) ^2-\left( \frac{M_L}{M_R}-1\right) ^2\right) } \end{aligned}$$

This paper uses an image sensor with a 1/4 inch semiconductor charge-coupled device (CCD) as an example, which has a physical size of \(3.2\,\mathrm{mm}\times 2.4\,\mathrm{mm}\) and a pixel resolution of \(720\times 576\). It is easy to obtain the following equation:

$$\begin{aligned} \frac{3.2}{720}=\frac{w_t}{M_T} \end{aligned}$$

Combining the above two equations leads to a closed-form solution for the deflection angle:

$$\begin{aligned} \alpha =\arctan \frac{225F\left( \left( \frac{M_L}{M_R}\right) ^2-1\right) }{M_T\left( \left( \frac{M_L}{M_R}+1\right) ^2-\left( \frac{M_L}{M_R}-1\right) ^2\right) } \end{aligned}$$
(9)

where the parameters \(M_L\), \(M_R\) and \(M_T\) can be obtained directly from the image processing module.
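For illustration, Eq. (9) transcribes directly into Python (the constant 225 equals 720 px / 3.2 mm, the pixel density of the assumed CCD; the sample pixel counts are made up):

```python
import math

def deflection_angle_deg(m_l, m_r, m_t, f_mm=12.8):
    """Deflection angle between the vehicles from edge pixel counts, Eq. (9)."""
    r = m_l / m_r
    num = 225.0 * f_mm * (r**2 - 1.0)
    den = m_t * ((r + 1.0)**2 - (r - 1.0)**2)   # = 4 * m_t * r
    return math.degrees(math.atan2(num, den))

# Left edge imaged slightly longer than the right edge.
print(deflection_angle_deg(m_l=382, m_r=362, m_t=540))  # approx. 8.2 degrees
```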

The lateral offset \(\varDelta x\) can be obtained as follows. We denote the horizontal offset of the target center as \(\varDelta b\) (as shown in Fig. 12), and \(D_1\) is the distance between the two vehicles measured by the laser distance meter. Based on the relationship between pixels and distance, we get the following equation.

$$\begin{aligned} \frac{\varDelta b}{225}=\frac{M_C}{M_T} \end{aligned}$$

With the help of the auxiliary lines in Fig. 12, we can obtain:

$$\begin{aligned} \varDelta x' = \varDelta b\cos \alpha - D_0\sin \alpha = \frac{225 M_C\cos \alpha }{M_T}-D_0\sin \alpha \end{aligned}$$
(10)
$$\begin{aligned} \varDelta x = \varDelta x' + L\sin \alpha \end{aligned}$$
(11)

where the parameter L denotes the half length of the vehicle body (as shown in Fig. 7).

The third parameter, \(\varDelta y\), can be calculated as follows:

$$\begin{aligned} \varDelta y' = D_1- D_0\cos \alpha - \varDelta b\sin \alpha = D_1- D_0\cos \alpha - \frac{225 M_C\sin \alpha }{M_T} \end{aligned}$$
(12)
$$\begin{aligned} \varDelta y = L- L\cos \alpha + \varDelta y' \end{aligned}$$
(13)
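A short sketch combining (10)-(13); the nominal distance \(D_0\) and half length L are hypothetical values, and the pixel counts are illustrative:

```python
import math

def offsets_mm(alpha_deg, m_c, m_t, d1_mm, d0_mm=1200.0, half_len_mm=600.0):
    """Lateral and longitudinal offsets of the rear vehicle, Eqs. (10)-(13)."""
    a = math.radians(alpha_deg)
    db = 225.0 * m_c / m_t                                     # target-center offset (mm)
    dx_prime = db * math.cos(a) - d0_mm * math.sin(a)          # Eq. (10)
    dx = dx_prime + half_len_mm * math.sin(a)                  # Eq. (11)
    dy_prime = d1_mm - d0_mm * math.cos(a) - db * math.sin(a)  # Eq. (12)
    dy = half_len_mm * (1.0 - math.cos(a)) + dy_prime          # Eq. (13)
    return dx, dy

print(offsets_mm(alpha_deg=5.0, m_c=24, m_t=540, d1_mm=1230.0))
```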

3.5 Kalman Filter and PID Controller

PID is an algorithm widely used in industrial control (Freire et al. 2018; Chopade et al. 2016). Given the mathematical model of the relationship between pixels and offsets, the exact deflection can be detected and accurately calculated by the controller. If the deflection is greater than the threshold value, the core controller coordinates the motors on the Mecanum vehicle to correct it. The three variables \([v_x, v_y, \omega _z]\) are the key control parameters. In order to respond rapidly and accurately, we use a Kalman filter and a PID algorithm (Han et al. 2009; Skoczowski et al. 2005) to adjust the vehicle body according to the obtained deflection values \([\varDelta x, \varDelta y, \alpha ]\).

Kalman filters have received much attention in robot localization, tracking, motion control, estimation and prediction (Chen 2012). In this paper, a Kalman filter is employed to predict the deflection of the rear vehicle.

We model the problem in this project with a Kalman filter as follows:

$$\begin{aligned} \hat{X}_{k|k-1} = \hat{X}_{k-1} \end{aligned}$$
(14)
$$\begin{aligned} P_{k|k-1} = P_{k-1}+Q \end{aligned}$$
(15)

where \(\hat{X}_{k-1}\) is the optimal estimate of the deviation at step \(k-1\), \(\hat{X}_{k|k-1}\) is the prediction for step k, \(P_{k|k-1}\) and \(P_{k-1}\) are the covariances of \(\hat{X}_{k|k-1}\) and \(\hat{X}_{k-1}\), respectively, and Q is the process noise covariance.

During the measurement update, the Kalman gain \(K_{gk}\) is computed by the following equations:

$$\begin{aligned} K_{gk} = P_{k|k-1}\left( P_{k|k-1} + R\right) ^{-1} \end{aligned}$$
(16)
$$\begin{aligned} \hat{X}_{k|k} = \hat{X}_{k|k-1}+K_{gk}\left( Z_{k}-\hat{X}_{k|k-1}\right) \end{aligned}$$
(17)
$$\begin{aligned} P_{k|k} = \left( I - K_{gk}\right) P_{k|k-1} \end{aligned}$$
(18)

where R is the covariance of the observation noise and \(Z_k\) is the measured deviation at step k.

Starting from initial estimates of the noise covariances (Q = 0.05, R = 0.1), the a posteriori state estimate (the optimal estimate of the current state) is generated by combining the predicted value with the measured deviation as in (17), and the error covariance estimate is updated by (18).
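A minimal scalar implementation of (14)-(18) with the stated covariances Q = 0.05 and R = 0.1; the measurement sequence is made up, and in practice one such filter runs per deviation channel.

```python
class ScalarKalman:
    """Scalar Kalman filter with a static state model, Eqs. (14)-(18)."""
    def __init__(self, q=0.05, r=0.1, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z):
        p_pred = self.p + self.q                 # predict: Eqs. (14)-(15)
        k_g = p_pred / (p_pred + self.r)         # gain: Eq. (16)
        self.x = self.x + k_g * (z - self.x)     # correct: Eq. (17)
        self.p = (1.0 - k_g) * p_pred            # covariance: Eq. (18)
        return self.x

kf = ScalarKalman()
for z in [14.2, 15.1, 14.7, 15.4]:               # noisy angle readings (deg)
    print(kf.update(z))
```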

The PID controller has been commonly used in industrial control systems; it aims to decrease the difference between a desired value and a measured value (Wen and Tong 2017). The posterior estimate \(\hat{X}_{k|k}\) is used to feed the PID controller, which is updated in a real-time recursive fashion. The continuous position-type PID expression is given in Eq. (19):

$$\begin{aligned} u(x)={K_p}\left( e(t) + \frac{1}{T_i}\int _{0}^{t} e(t)\, \mathrm{d}t + {T_d}\frac{\mathrm{d}e(t)}{\mathrm{d}t} \right) \end{aligned}$$
(19)

In the proposed system, the controller is discretized as shown in Eq. (20):

$$\begin{aligned} u(k)={K_p}e(k)+\frac{{K_p}T}{T_i}\sum _{n=0}^ke(n)+\frac{{K_p}{T_d}}{T}(e(k)-e(k-1)) \end{aligned}$$
(20)

In our modeled discrete system, after a series of simplifications, the control variable is determined by Eq. (21):

$$\begin{aligned} u(k)={K_p}e(k)+ {K_i}\sum _{n=0}^ke(n)+{K_d}(e(k)-e(k-1)) \end{aligned}$$
(21)

where u(k) denotes the output motor speed at time k, \(K_p\) is the proportional coefficient, \(K_i\) is the integral coefficient, \(K_d\) is the differential coefficient and e(k) is the deviation between the front and rear vehicle at time k.

Of particular importance is obtaining accurate values of \(K_p\), \(K_i\) and \(K_d\), which are essential to the controller. After numerous experiments, we obtained a set of parameter values (\(K_p\): 16; \(K_i\): 0.3; \(K_d\): 10) that enables the automatic coordination system to correct itself precisely and quickly.
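The controller of Eq. (21) with these tuned gains can be sketched as below; the integral clamp is our addition for anti-windup, not part of the paper's formulation.

```python
class PID:
    """Discrete PID controller of Eq. (21); one instance per control axis."""
    def __init__(self, kp=16.0, ki=0.3, kd=10.0, i_limit=1000.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_sum, self.prev_e, self.i_limit = 0.0, 0.0, i_limit

    def step(self, e_k):
        # Accumulate the error sum (clamped to avoid integral windup).
        self.i_sum = max(-self.i_limit, min(self.i_limit, self.i_sum + e_k))
        u_k = (self.kp * e_k
               + self.ki * self.i_sum
               + self.kd * (e_k - self.prev_e))
        self.prev_e = e_k
        return u_k

pid = PID()
print(pid.step(15.0))   # a large initial deflection yields a strong correction
```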

4 Experiment Results and Discussion

This section evaluates the designed dual-vehicle coordination system through practical experiments in a real-world environment. This project uses an image sensor with a 1/4-inch CCD made by SONY, which has a physical size of \(3.2\,\mathrm{mm}\times 2.4\,\mathrm{mm}\) and a pixel resolution of \(720\times 576\). The width and height of the target image mounted on the front vehicle are 225 mm and 158 mm, respectively. In the experiments, the camera is fixed at \(4\times \) zoom, corresponding to a focal length of 12.8 mm, and the two vehicles are placed at a default distance of 1.2 m.

Table 1 Experimental results of deflection angles and estimated values
Table 2 Experimental results of transverse and longitudinal offsets

The first set of experiments focused on verifying the angle deflection model of (9). The front vehicle carrying the target was rotated anticlockwise to create angle deflections. The relationship between the measured deflection angles (ground truth) and the calculated angles is recorded in Table 1. From this experiment, we observe that as the deflection angle increases, the numbers of pixels at the top, bottom and left sides of the captured image decrease, while the number of pixels at the right side increases. Comparing the calculated value \(\alpha \) with the measured deflection angle, we conclude that the theoretical analysis in Sect. 3.4 matches the results of the practical experiments.

The second set of experiments verified the displacement models of (10)-(11) and (12)-(13). We placed the rear vehicle at different positions and recorded \(\alpha \), \(\varDelta x'\) and \(\varDelta y'\). All measured and calculated values are shown in Table 2. The average offset error between the two vehicles is approximately 3 to 10 mm towards the central line, which meets the production requirement (SIASUN 2014).

Fig. 13
figure 13

The temporal performance of the coordination system

Figure 13 shows a segment of a real-time experiment running until the system reaches the steady state. In the figure, the red line denotes the deflection angle \(\alpha \), the black line denotes the offset \(\varDelta x'\), and the green line denotes the offset \(\varDelta y'\). With the proposed system, the angle deflection was corrected from 15\(^{\circ }\) to close to 0\(^{\circ }\), and the displacements \(\varDelta x'\) and \(\varDelta y'\) were adjusted from 400 mm and 70 mm, respectively, to close to 0 mm. The results confirm the performance of the developed coordination system.

5 Conclusion

This paper has introduced the design and testing of a novel intelligent correction system for creating omnidirectional mobile platforms. The main idea of the system is to use an image processing algorithm to estimate the body offsets and rotation angle between two vehicles and to apply a Kalman filter and a PID controller to adjust the rear vehicle and mitigate the deviation. The system has been tested in real-world transport scenarios, which confirmed that the dual-vehicle coordination system is able to accurately detect the angle, lateral and longitudinal offsets between the two vehicles and to correct them efficiently using the controller. The developed dual-vehicle omnidirectional system has been applied in a bullet train production line to transport train carriages, which are particularly challenging for normal omnidirectional vehicles because of their colossal size and weight and the high requirement for coordinated movement.