A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

A monocular vision-based pose measurement system is presented for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. First, a circular planar cooperative target is designed. An image of the target fixed on the test-bed is then acquired. Blob analysis-based image processing is used to detect the object circles on the target. A fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of the object circles. Finally, pose measurements are obtained by combining these centers with the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the pose measurement requirements.


Introduction
An air-bearing test-bed is a unique device that is used for ground tests of satellites and other spacecraft [1]. The test-bed can simulate the microgravity and near-frictionless conditions of the space environment [2]. The resulting full physical simulation method plays a very important role in verifying the control modes and actual performance of spacecraft. Real-time, accurate measurement of the pose of the air-bearing test-bed is therefore an important prerequisite for the corresponding control.
Because of the nature of the test, no physical connection between the ground equipment and the air-bearing test-bed is allowed. The position and attitude information must therefore be obtained by a non-contact method. At present, angular velocity sensors [3], star sensors [4], and rate gyroscopes [5] are used to perform attitude measurements. The mechanical structures and sensor systems used in these methods are complex and difficult to maintain. The Jet Propulsion Laboratory of the California Institute of Technology determines the attitude of its air-bearing test-bed by establishing an indoor ceiling-mounted light-emitting diode (LED) star map [6], but this approach is costly.
Test-beds with three or more degrees of freedom can simulate the position and attitude motion of space vehicles, and can also be used in full physical simulation to verify control laws and other aspects of space vision measurement on the ground. A large body of literature has studied different forms of position and attitude motion. Reference [8] studied the attitude control of a 3-DOF air-bearing test-bed. Reference [9] introduced the measurement, optical, control, and mechanical systems from the structural perspective of a 3-DOF air-bearing test-bed. Reference [10] studied a ground laboratory platform for satellite attitude control. Reference [11] proposed several algorithms to estimate the mass properties of an air-bearing test-bed, and the validity of these methods was verified by experiment.
To measure the pose of a three-degree-of-freedom (3-DOF) air-bearing test-bed simply and at low cost, a visual measurement system is presented here. First, a cooperative target is designed. Next, a method is proposed for recognizing the object circles on the target. Then, a fast circle center extraction algorithm based on statistics of pixels (FCCSP) is provided to calculate the circle centers while meeting the real-time requirements of the system. Compared with existing attitude determination schemes, the proposed system is faster, meets the accuracy requirements, and is robust against rotation and translation. The cheap charge-coupled device (CCD) camera used in the system also makes it convenient to deploy.

Proposed method
The pose measurement system is shown in Fig. 1. The system mainly consists of the marble platform, the air-bearing table, the cooperative target, a CCD camera, and a host computer. The 3-DOF air-bearing table floats on the marble platform using three planar air bearings, which can translate in the X and Y directions and rotate around the yaw axis. The cooperative target is placed on the test-bed such that it has the same pose as the test-bed. The CCD camera then captures an image that includes the cooperative target. The host computer then measures the pose based on the centers of the object circles and the coordinate transformation relationship.

Design of the cooperative target
The cooperative target consists of three white object circles and a black background area, as shown in Fig. 2. The white object circles are used to recognize the target and calculate their centers. The black background area maintains color consistency with the black marble platform and provides strong contrast with the white object area. The lines connecting the centers of the three object circles form an isosceles right triangle.

Object circle recognition
The methodology of the recognition algorithm for the object circles on the cooperative target is shown in Fig. 3. In the first step, a median filter is used to remove potential noise from the images. In the second step, an improved Otsu algorithm [12] is used to obtain an appropriate threshold automatically. Following the image segmentation, eight-connected domains and the corresponding boundary chain-codes can be extracted using blob analysis [13]. In this paper, the eight-direction boundary chain-code is used to calculate descriptive parameters for the connected domains, as shown in Fig. 4. The coordinates R_i(x, y) and the number n of boundary points can be obtained by encoding and decoding. The descriptive parameters required for the following step can then be calculated. For example, the width and height of a domain follow from its boundary points as
W = x_max − x_min + 1,  H = y_max − y_min + 1
where x_max, x_min, y_max, and y_min are the extrema of the boundary point coordinates.
However, some interferential connected domains are still present after blob analysis is performed. To increase the pose measurement accuracy, a blob analysis-based method is presented to detect the white object circles on the cooperative target. In the measurement system, the cooperative target dimensions and the camera position are fixed. Therefore, the width, height, and aspect ratio of the object circle domains, and the distances between these domains, should lie within certain ranges; blobs whose width or height falls outside these ranges can be ignored. In this paper, the ranges are established through a combination of practice and experience. The object circle domains can thus be identified preliminarily by requiring
W_min ≤ W ≤ W_max  and  H_min ≤ H ≤ H_max.
The distance constraint can then be used to increase the detection accuracy of the white object circles. Calculating all the distances for each blob is complex and time-consuming. Therefore, a rectangular window is set around each blob, as shown in Fig. 5 (taking blob A as an example), to eliminate interferential discrete blobs lying at greater distances and to reduce the computational cost.
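The size-based screening described above can be sketched as follows. The concrete bounds here are illustrative assumptions (the paper sets its ranges from practice and experience), and the blob representation as a dictionary with a `boundary` point list is hypothetical:

```python
def filter_blobs_by_size(blobs, w_range=(20, 80), h_range=(20, 80),
                         ar_range=(0.8, 1.25)):
    """Keep only blobs whose bounding-box width, height, and aspect ratio
    fall inside the given ranges (illustrative bounds; the paper
    establishes them empirically)."""
    kept = []
    for blob in blobs:
        xs = [x for x, y in blob["boundary"]]
        ys = [y for x, y in blob["boundary"]]
        w = max(xs) - min(xs) + 1   # bounding-box width from boundary points
        h = max(ys) - min(ys) + 1   # bounding-box height from boundary points
        ar = w / h                  # aspect ratio
        if (w_range[0] <= w <= w_range[1] and h_range[0] <= h <= h_range[1]
                and ar_range[0] <= ar <= ar_range[1]):
            kept.append(blob)
    return kept
```

Because the target geometry and camera position are fixed, a simple per-blob range test like this discards most interferential domains before the more expensive distance check.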
In the ideal case, the lines connecting the centers of the object circles form an isosceles right triangle with legs of length L, so the length of the hypotenuse is √2·L. Because of the camera distortion and the error in the domain centers calculated from the boundary chain-code, the rectangular window side length is set to 3L to contain all the object circles. The center of the rectangle coincides with the domain to be detected. When the number of blobs in the rectangular window is three, the distances between these blobs are calculated. If every distance l satisfies 0.8L < l < 1.6L, then these blobs H_p (where p = 1, 2, 3) are identified as the white object circles on the cooperative target.
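The windowed distance constraint can be sketched as follows, assuming blob centers are already available as (x, y) tuples; `find_object_circles` is an illustrative name, and the 3L window and 0.8L–1.6L bounds follow the text:

```python
from itertools import combinations
from math import dist

def find_object_circles(centers, L, window=3.0):
    """For each candidate center, collect candidates inside a square window
    of side window*L around it. If exactly three candidates fall inside and
    every pairwise distance l satisfies 0.8L < l < 1.6L (covering both the
    legs L and the hypotenuse sqrt(2)*L), return them as the object circles."""
    half = window * L / 2.0
    for c in centers:
        inside = [p for p in centers
                  if abs(p[0] - c[0]) <= half and abs(p[1] - c[1]) <= half]
        if len(inside) == 3 and all(0.8 * L < dist(p, q) < 1.6 * L
                                    for p, q in combinations(inside, 2)):
            return inside
    return None  # target not found in this frame
```

Restricting the pairwise distance test to the three blobs inside the window avoids computing distances between every pair of blobs in the image.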

Proposed center extraction method
To obtain the pose of the cooperative target, the centers of these circles must now be calculated. Traditional algorithms, such as the Hough transform and edge-based methods, consume considerable memory and tend to run slowly. To meet the real-time requirements of the system, a fast circle central extraction algorithm based on statistics of pixels (FCCSP) is proposed.
Firstly, new domains BW_p are defined based on the minimum enclosing rectangles of the H_p obtained in the previous step. Each BW_p shares the center of its minimum enclosing rectangle and expands outwards by one pixel in each of the four directions, as shown in Fig. 6. The BW_p are then processed by row and column scanning. The number of pixels m_j with a value of 1 in the jth row is stored in the array m. Similarly, the number of pixels n_k with a value of 1 in the kth column is stored in the array n.
In the image coordinate system, the circle satisfies
(x − o_x)^2 + (y − o_y)^2 = r^2
where (o_x, o_y) is the circle center and r is the radius. Suppose that (t_1, s_j) is the coordinate of the first pixel with a value of 1 in the jth row and (t_2, s_j) is that of the last. Then m_j can be obtained from t_1 and t_2:
m_j = t_2 − t_1 + 1.
Because t_1 and t_2 lie on the circle, t_1 and t_2 equal o_x ∓ sqrt(r^2 − (s_j − o_y)^2), so the squared row count M_j = m_j^2 is a quadratic function of the row ordinate s_j:
M_j = m_j^2 ≈ 4[r^2 − (s_j − o_y)^2] = c_2 s_j^2 + c_1 s_j + c_0
with c_2 = −4, c_1 = 8 o_y, and c_0 = 4(r^2 − o_y^2). The least-squares method is then used to estimate the parameters c_0, c_1, and c_2 by minimizing the sum of the squared deviations
S = Σ_j [M_j − (c_2 s_j^2 + c_1 s_j + c_0)]^2.
Setting the partial derivatives of S with respect to c_0, c_1, and c_2 to zero yields the normal equations, which can be solved using the known values M_j and s_j to obtain the coefficients c_0, c_1, and c_2. When M_j reaches its maximum value, the pixel count m_j is also at its maximum, which means that the jth row passes through the center of the circle. The ordinate of the center is therefore the vertex of the fitted parabola:
o_y = −c_1 / (2 c_2).
The abscissa o_x is obtained in the same way from the column counts n_k. With the FCCSP, the centers of circles A, B, and C can be calculated by scanning each BW_p. The efficiency is also greatly enhanced compared with previous methods because there is no need to process the entire image.
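A minimal sketch of this center estimate, assuming the input is a NumPy binary (0/1) patch for one BW_p; `fccsp_center` and its helper are illustrative names:

```python
import numpy as np

def fccsp_center(bw):
    """Estimate the circle center in a binary patch following the FCCSP
    idea: the squared per-row pixel count M_j = m_j^2 is (approximately)
    quadratic in the row ordinate s_j, so a least-squares parabola fit
    gives o_y = -c_1 / (2 c_2); columns give o_x the same way."""
    def vertex(counts):
        s = np.nonzero(counts)[0]            # ordinates with chord pixels
        M = counts[s].astype(float) ** 2     # squared chord lengths M_j
        c2, c1, c0 = np.polyfit(s, M, 2)     # fit M = c2*s^2 + c1*s + c0
        return -c1 / (2.0 * c2)              # vertex of the parabola
    rows = bw.sum(axis=1)                    # m_j: 1-pixels per row
    cols = bw.sum(axis=0)                    # n_k: 1-pixels per column
    return vertex(cols), vertex(rows)        # (o_x, o_y)
```

Only two 1-D passes over the patch and one three-parameter fit per axis are needed, which is why this is cheaper than a Hough transform over the whole image.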

Pose measurement
Through extraction of the centers of the object circles, a target coordinate system can be established. The different circles can be distinguished using the descriptive parameters mentioned earlier. The center of circle A is taken as the origin. The line connecting the centers of A and B is taken as the Y-axis. The X-axis is perpendicular to the Y-axis and points towards circle C, as shown in Fig. 7.
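Given the three extracted centers in image coordinates, the target frame can be sketched as below. `target_pose` is an illustrative name, and the yaw sign convention (angle of the A→B axis measured from the image y-axis) is an assumption, since the paper defers the exact transformation to [14]:

```python
from math import atan2, degrees

def target_pose(A, B, C):
    """Build the target frame from the three circle centers: origin at A,
    Y-axis along A->B, X-axis perpendicular and pointing towards C.
    Returns (origin, yaw_deg); the yaw convention here is an assumption."""
    ax, ay = A
    yx, yy = B[0] - ax, B[1] - ay          # Y-axis direction (A -> B)
    xx_, xy_ = yy, -yx                      # candidate X-axis: Y rotated 90 deg
    if xx_ * (C[0] - ax) + xy_ * (C[1] - ay) < 0:
        xx_, xy_ = -xx_, -xy_               # flip so X points towards circle C
    yaw = degrees(atan2(yx, yy))            # rotation of Y-axis vs. image y-axis
    return (ax, ay), yaw
```

With the origin and yaw in the image frame, the fixed camera-to-platform calibration then maps this pose to the actual pose of the test-bed.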
At this point, the center and the angle of the target can be obtained from the relative positioning between the target coordinate system and the image coordinate system. The transformation relationships among the coordinate systems of the measurement system can be established using previously proposed methods [14]. The actual pose of the air-bearing test-bed can then be calculated from the coordinate transformation and the pose in the image coordinate system.
Fig. 7 Establishment of the target coordinate system.

Experimental results
To verify the performance of the proposed method, experiments are performed in MATLAB R2013b on a computer with an Intel Core 2 Duo central processing unit operating at 3.7 GHz and 4 GB of memory. The camera is calibrated using the self-calibration method [15]. Figure 8 shows the results of each processing step for an image captured in the real environment.
The synthetic images in Fig. 9 depict different poses of the cooperative target; they provide exact ground-truth data for comparison and simulate an actual environment through added noise.
The average time spent extracting the circle centers of each image in Fig. 9 using the Hough transform, the method in [16], and the FCCSP is shown in Table 1. A comparison between the actual poses and the poses calculated using the proposed method is shown in Table 2. The maximum errors in the center and the angle are 0.036 pixels and 0.059°, respectively. Many sets of data show that the accuracy of the proposed algorithm is slightly higher than that of the Hough transform and the method in [16], and, most importantly, that it takes the shortest time. These results demonstrate that the proposed method offers higher efficiency than the earlier methods while ensuring the accuracy of the center extraction, and that it is robust against rotation and translation. Sub-pixel measurement accuracy is attained for the target pose, which satisfies the system requirements.

Conclusions
A real-time pose measurement system based on monocular vision is proposed here. A cooperative target is designed to measure the positioning and orientation of the air-bearing test-bed. A target recognition method based on blob analysis is presented and used to detect white object circles. The FCCSP algorithm is used to extract the centers of the circles quickly and accurately. Finally, the target pose can be calculated. Experimental results show that the FCCSP algorithm is faster than the existing methods and is robust against rotation and translation. Sub-pixel level measurement accuracy is attained for the target pose and satisfies the system requirements. In addition, the CCD camera used in this system is less costly and more convenient for use in the air-bearing test-bed.