An Object Association Matching Method Based on V2I System

Vehicle-to-infrastructure (V2I) is one of the effective ways to solve the perception problem of intelligent connected vehicles, and its core is fusing the information sensed by vehicle sensors with that sensed by infrastructure sensors. However, accurately matching the object detected by the vehicle with the multiple objects detected by the infrastructure remains a challenge. This paper presents an object association matching method to fuse the object information from vehicle sensors and roadside sensors, enabling the matching and fusion of information from multiple targets. The proposed object association matching algorithm consists of three steps. First, the deployment method for vehicle sensors and roadside sensors is designed. Then, the laser point cloud data from the roadside sensors are processed using the DBSCAN algorithm to extract the object information on the road. Finally, an improved single-pass algorithm for object association matching is proposed, which selects the matched target by setting a speed-dependent threshold. To validate the effectiveness and feasibility of the proposed method, real-vehicle experiments are conducted. Furthermore, the improved single-pass algorithm is compared with the classical Hungarian algorithm, Kuhn–Munkres (KM) algorithm, and nearest neighbor (NN) algorithm. The experimental results demonstrate that the improved single-pass algorithm achieves a target trajectory matching accuracy of 0.937, which is 6.60%, 1.85%, and 2.07% higher than the above-mentioned algorithms, respectively. In addition, this paper investigates the curvature of the target vehicle trajectory data after fusing vehicle sensing information and roadside sensing information; the curvature mean, curvature variance, and curvature standard deviation are analyzed. The experimental results illustrate that the fused target information is more accurate and effective.
The method proposed in this study contributes to the advancement of the theoretical system of V2I cooperative perception and provides theoretical support for the development of intelligent connected vehicles.


Introduction
Traffic congestion and safety problems have severely affected economic development and people's lives and degrade traffic flow, so they need to be resolved [1][2][3][4]. Vehicle-to-infrastructure (V2I) technology is key to solving these problems [5][6][7]. V2I refers to the technology for communication and interaction between vehicles and infrastructure. It covers the connection between vehicles and traffic signals, roadside sensors, highway toll systems, road signs and other traffic infrastructure. Through this connection, vehicles can exchange information with the infrastructure to obtain real-time information on traffic conditions, road restrictions, road warnings and navigation instructions, and make appropriate decisions. Nevertheless, as the basis of the V2I system, the accurate matching of objects perceived by the vehicle and by the infrastructure remains unresolved. With the development of the V2I system, carrying high-precision, high-reliability sensors on the vehicle and on the infrastructure makes both the vehicle and the road smarter, diversifies the ways in which vehicle positioning information can be acquired, and makes object recognition more accurate. At present, sensors commonly used on vehicles include radar, Global Navigation Satellite System (GNSS) receivers, inertial navigation systems (INS), and cameras. Sensors commonly used on the infrastructure include cameras and laser radar [8][9][10][11][12]. Different ways of obtaining information lead to a diversity of data analysis methods. The obtained vehicle motion information includes speed, acceleration, position, and heading angle. Position information is acquired mainly through inertial navigation, odometers, simultaneous localization and mapping [13,14], GNSS-based real-time kinematic (RTK) positioning, and light detection and ranging (LiDAR)-based high-precision map matching [15,16]. The infrastructure commonly uses cameras and laser radar to obtain image and video information and point cloud data to analyze the motion state of the vehicle [17,18].
For LiDAR-based sensing, scholars and many urban traffic management departments have proposed clustering LiDAR point cloud data to analyze possible surrounding objects. Common clustering algorithms include distance-based and density-based approaches. For example, Kim proposed a framework to enhance deep-learning-based classification performance by augmenting the shape information of the LiDAR point cloud [19]. Chen et al. analyzed key ingredients of 3D point cloud processing and learning for map creation, localization, and perception [20]. Li et al. summarized the milestone 3D deep architectures and the remarkable deep learning (DL) applications in 3D semantic segmentation, object detection, and classification [21]. Mahdaoui et al. proposed a k-nearest neighbor (k-NN) clustering algorithm to simplify 3D point clouds and used entropy estimation to remove points of minimal entropy [22]. Zhou proposed VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction in a single-stage, end-to-end trainable deep network; the point cloud is encoded as a descriptive volumetric representation, which is connected to a region proposal network (RPN) to generate detections [23]. Shi et al. proposed PointRCNN for 3D object detection from raw point clouds, including bottom-up 3D proposal generation and refinement of the proposals in canonical coordinates to obtain the final detection results [24]. In addition, many scholars from other fields have analyzed processing models for 3D data [25,26].
Due to the influence of the sampling frequency and precision range of sensor equipment, the environmental perception information obtained by a single sensor is often not comprehensive or has low data analysis efficiency.
Therefore, some scholars have studied installing more types of sensors at different locations to perceive the environment and have used different matching algorithms to fuse the perception information of the different sensors. For example, Bouain et al. proposed a multi-sensor data fusion (MSDF) embedded design for vehicle perception tasks using stereo camera and light detection and ranging (LiDAR) sensors, and designed a modular and scalable architecture based on the Zynq-7000 SoC [27]. Gao et al. presented an object classification method based on a convolutional neural network (CNN) and image upsampling theory for vision-LiDAR fusion; the method can obtain informative feature representations for object classification [28]. Park et al. proposed a deep sensor fusion framework for high-precision depth estimation and addressed the problem of 3D reconstruction from uncalibrated LiDAR point clouds and stereo images [29]. Duan et al. constructed an environment perception framework to improve the environmental awareness of autonomous vehicles at intersections by leveraging V2I communication technology [30]. Noh et al. presented a cooperative system using V2I communications for data fusion based on situation awareness and distributed reasoning based on situation assessment [31]. O'Callaghan et al. described a single-pass streaming algorithm for effectively clustering large data streams and provided empirical evidence of its performance on synthetic and real data streams [32]. In addition, Ma, Lv, Xie and other scholars have studied engineering applications of data fusion and feature fusion algorithms [33][34][35].
However, when the clusters in a dataset have nonspherical structures, distance-based clustering performs poorly, whereas density-based clustering can find clusters of arbitrary shape. Since it looks for high-density regions separated by low-density regions in the dataset, each separated high-density region is treated as an independent category. Therefore, in this paper we adopt the DBSCAN algorithm to cluster the point cloud data, extract object information from the clusters, and use a single-pass algorithm to match the vehicle and object information, thereby obtaining the surrounding environmental object information of the vehicle during driving.
In this paper, we propose an object association matching method to fuse vehicle sensing information and roadside sensing information. The method determines which of the many perceived objects is the target vehicle, and the fused information enables the target vehicle to perceive its surroundings more comprehensively and precisely so that it can make better decisions. This study promotes the development of autonomous driving technology under intelligent networking and advances theoretical research on V2I cooperative perception. The main framework of the paper is shown in Fig. 1. The main innovations of this paper are summarized as follows: (1) We use an integrated navigation system to collect vehicle sensor information. In addition, we propose a method of clustering the filtered point cloud data obtained by multiple LiDAR sensors located on the infrastructure using the DBSCAN algorithm to obtain the dynamic object information in the driving environment. (2) We propose an improved single-pass algorithm to fuse vehicle sensing information and infrastructure sensing information and to associate and match the sensing information of both. (3) Finally, we verify the accuracy and effectiveness of the algorithm.
The rest of the paper is organized as follows: the next section introduces our materials and methods, including the vehicle sensing datasets, the infrastructure sensing methods and datasets, the association matching method and the fused datasets. Sect. 3 gives the experimental results and discussion. Finally, the conclusions are presented.

An Information Collection Method for Vehicle Sensors
The vehicle is equipped with an integrated navigation system for collecting vehicle motion information. The integrated navigation system combines the Global Navigation Satellite System (GNSS) and the inertial navigation system (INS) to estimate and measure the motion and pose of the vehicle. It measures the motion state and pose through the built-in inertial measurement unit and, by coupling with GNSS, converts them into a global coordinate system. The main working principle is shown in Fig. 2. The integrated navigation system obtains the current position of the vehicle. In this paper, the equipment obtains the latitude, longitude, heading angle, northing speed (longitudinal speed), and easting speed (lateral speed) of the vehicle to calculate the absolute position, driving speed and heading angle of the vehicle. The vehicle position is used for the subsequent association and matching with the sensing objects of the infrastructure LiDARs, while the driving speed, heading angle, etc., are used to represent the vehicle motion behavior. However, since the acceleration cannot be obtained directly, this paper calculates it from the definition of acceleration, using the time difference between two frames and the corresponding speeds:

a_t = (v_t − v_{t−1}) / Δt  (1)

where a_t represents the acceleration to be calculated, v_t represents the velocity output at the current moment, v_{t−1} is the velocity output at the previous moment, and Δt represents the time difference between the two moments (Table 1).
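The finite-difference acceleration of Eq. (1) can be sketched in a few lines; the function name and sample values below are illustrative, not from the paper:

```python
def acceleration(v_t, v_prev, dt):
    """Finite-difference acceleration between two velocity samples (Eq. 1)."""
    if dt <= 0:
        raise ValueError("time difference must be positive")
    return (v_t - v_prev) / dt

# Example: speeds in m/s sampled 100 ms apart (the paper's time window)
a = acceleration(5.2, 5.0, 0.1)  # approximately 2.0 m/s^2
```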
The longitude and latitude obtained from the integrated navigation system are expressed in the World Geodetic System 1984 (WGS84) coordinate system. For vehicle driving, the driving environment is generally regarded as a two-dimensional plane, so this paper uses the Universal Transverse Mercator Grid System (UTMGS) plane projection coordinate system to facilitate subsequent calculation and processing. The coordinate conversions in the other sections of this paper are performed in the same way.
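The paper performs a full WGS84-to-UTMGS projection; the sketch below instead uses a simple local tangent-plane approximation (reference point and earth radius are illustrative assumptions) to show the idea of flattening latitude/longitude into plane metres over a small road segment:

```python
import math

def latlon_to_local_xy(lat, lon, ref_lat, ref_lon):
    """Project WGS84 lat/lon to a local east/north plane (metres) around a
    reference point. A small-area stand-in for the UTM-style projection
    used in the paper; adequate only for a single road segment."""
    R = 6378137.0  # WGS84 equatorial radius in metres
    x = math.radians(lon - ref_lon) * R * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * R
    return x, y
```

For production use, a tested projection library (e.g. one implementing the full UTM formulas) should replace this approximation.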
This section has introduced the equipment and methods for vehicle information collection. Through the vehicle's integrated navigation system, the absolute coordinates, longitudinal velocity, lateral velocity, longitudinal acceleration, lateral acceleration, heading angle and other information of the vehicle at the current moment can be obtained.

Infrastructure LiDAR Sensing Method Based on DBSCAN
The infrastructure sensors in this paper are installed on the roadside. Multiple LiDARs are deployed on the roadside to collect environmental information, perceive the motion states of objects, and complement the surrounding object information that the vehicle cannot perceive. LiDAR relies on the acquired point cloud data to perceive the traffic environment. Since the LiDARs are arranged at different positions, the point cloud data obtained from the different viewing angles are inconsistent, so the spatial relationship among the LiDARs must be synchronized. The positions of the LiDARs are measured with global positioning system (GPS) equipment. An offset calculation then unifies the point cloud data from the multiple LiDARs into a single viewing angle, ensuring time-synchronized multi-LiDAR point cloud data. In addition, the point cloud data and the vehicle sensor information must share the same coordinate system, so the point cloud data are also converted to UTMGS. Since the geographical environment is static, the original point cloud is filtered using the environment height and lane line distance information. In this paper, the perception range is limited to a fixed area: points higher than 1.8 m or lower than 0.5 m are culled, and environmental information outside the lane is filtered out. The DBSCAN algorithm is then used to cluster the filtered point cloud data to obtain dynamic object information.
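A minimal sketch of the static-environment filtering step; the 0.5–1.8 m height band is from the text, while the lateral lane corridor is an assumed parameter (the paper filters by lane line distance, whose exact geometry is not given):

```python
def filter_points(points, z_min=0.5, z_max=1.8, lane_bounds=(-3.5, 3.5)):
    """Keep only (x, y, z) points inside the height band and an assumed
    lateral lane corridor (metres), mirroring the paper's culling of points
    above 1.8 m, below 0.5 m, and outside the lane."""
    lo, hi = lane_bounds
    return [(x, y, z) for (x, y, z) in points
            if z_min <= z <= z_max and lo <= y <= hi]
```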
DBSCAN is a density-based spatial clustering algorithm. It divides regions of sufficient density into clusters and finds clusters of arbitrary shape in a noisy spatial database, defining a cluster as the largest set of density-connected points. The algorithm takes the filtered point cloud of the current frame as input and outputs the cluster set. The procedure is shown in Algorithm 1: first, all objects in the point cloud P of the current frame are marked as unprocessed; for each unprocessed point p, its Eps neighborhood NEps(p) is checked, and if NEps(p) contains at least MinPts objects, a new cluster C is created; then, for every unprocessed point q in NEps(p), its Eps neighborhood NEps(q) is checked, and if NEps(q) also contains at least MinPts objects, the objects in NEps(q) that have not yet been assigned to any cluster are added to C. After DBSCAN clustering, the object information existing in the traffic environment can be obtained; the object center point and object type information are shown in Table 2. These time-synchronized and spatially synchronized onboard sensing data and roadside sensing data can be used by the subsequent object association matching algorithm.
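The clustering step of Algorithm 1 can be sketched as a small pure-Python DBSCAN over 2-D points; Eps and MinPts correspond to the paper's parameters, but this is an illustrative implementation, not the authors' code (real point clouds would use a spatial index rather than the brute-force neighborhood search shown here):

```python
import math
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points. Returns a label per point:
    0, 1, ... for clusters, -1 for noise."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        # Brute-force Eps neighborhood (includes the point itself)
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        queue = deque(seeds)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:   # j is also a core point: expand further
                queue.extend(nb)
    return labels
```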

Object Association Matching Method Based on a Single-Pass Algorithm
The vehicle motion information is obtained through the onboard integrated navigation system, and the coordinates and size information of the surrounding dynamic objects are collected through the roadside multi-LiDAR devices. However, if the infrastructure perception information were sent directly to the vehicle, the experimental vehicle itself and the perceived object information would overlap; association matching is therefore needed to obtain a vehicle-infrastructure fusion information dataset containing the experimental vehicle.
Within a time window, the position of the experimental vehicle p_t = {x_t, y_t} is obtained through the integrated navigation system, and the infrastructure multi-LiDAR sensors obtain the perception object set C = {c_n = (x_n, y_n) | n ∈ {1, …, N}}. It is necessary to associate the vehicle position p_t with the multiple objects in the perception object set C to achieve spatially synchronized fusion of p_t into C. Between the two data sources, the infrastructure multi-LiDAR may lose track of the experimental vehicle, and the vehicle position sensed by the onboard integrated navigation may experience positioning drift. Nevertheless, onboard perception information and infrastructure perception information can complement each other, and repeated objects can be matched and fused.
The single-pass algorithm is a classical method for clustering streaming text data in the field of text processing; it has a fast processing speed and is suitable for real-time data. The algorithm determines the similarity between a new sample and the existing classes: if the new sample is similar enough to an existing class, it is put into that class; otherwise, it forms a class of its own.
In this paper, the algorithm processes the vehicle position one time window at a time in chronological order. By comparing the current vehicle position with the existing infrastructure perception object set, the position is judged to be either one of the objects in the infrastructure perception object set or a new object. The object set obtained at the current moment is regarded as the candidate set for association and is matched against the vehicle position obtained at the same moment to identify which object in the set is the experimental vehicle. Since the threshold T set by the classical single-pass algorithm is a fixed value, the coordinates cannot be matched well. In this paper, the single-pass algorithm is improved as follows: the threshold T is weighted according to the vehicle speed obtained at the previous moment, yielding a dynamic threshold T and a better association matching effect. The improved procedure is shown in Algorithm 2. The algorithm compares the similarity between the experimental vehicle position and the infrastructure perception object set. If the similarity exceeds the threshold, the match is considered successful. If no match succeeds, the experimental vehicle position is classified into a new category, indicating that the experimental vehicle is not sensed by the infrastructure, and this position is added to the sensing result as vehicle position information (Table 3).
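The core matching step of the improved single-pass algorithm can be sketched as follows; the function is a hypothetical helper rather than the authors' code, with the dynamic threshold T = T_p + a·v_{t−1} and the default values T_p = 0.5 m and a = 0.1 taken from the experimental section:

```python
import math

def single_pass_match(p_t, objects, v_prev, t_fixed=0.5, weight=0.1):
    """Match the vehicle position p_t = (x, y) against the roadside
    perception set using a speed-weighted dynamic threshold
    T = t_fixed + weight * v_prev. Returns the index of the matched
    roadside object, or None if the vehicle is not sensed."""
    T = t_fixed + weight * v_prev
    best, best_d = None, float("inf")
    for idx, (x, y) in enumerate(objects):
        d = math.hypot(p_t[0] - x, p_t[1] - y)
        if d < best_d:
            best, best_d = idx, d
    if best is not None and best_d <= T:
        return best      # matched: p_t is one of the perceived objects
    return None          # unmatched: add p_t to the result as a new object
```

When the function returns None, the caller appends p_t to the perception result as a new category, exactly as described above.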

The Experimental Development
The experiment is carried out on the Yujiatou Campus of Wuhan University of Technology. The LiDAR deployment locations and related roads are shown in Fig. 3, and the physical installation of the LiDARs is shown in Fig. 4. The vehicle used in this experiment is the ITS experimental vehicle of Wuhan University of Technology. The vehicle uses the Npos220 micromechanical integrated navigation system produced by BDStar Navigation, and the infrastructure uses Velodyne VLP-16 LiDAR units. In this experiment, Velodyne's multi-LiDAR driver is used for time synchronization, and the four LiDAR sensors are activated simultaneously through optical fibers and switches. The LiDAR data are collected with the Robot Operating System (ROS). The coordinate function in the utexas-art-ros-pkg package in the ROS environment is used to transform coordinates to the UTM coordinate system.
The data collection experiment is carried out in steps. In the first step, four drivers are recruited, two male and two female, with an average driving experience of 6 years. In the second step, two people ride in the vehicle for each run: a driver and an operator who records the data and keeps the onboard equipment running normally. In the third step, the roadside equipment is installed and operated, with one experimenter responsible for each roadside node to ensure that the equipment runs normally and data are collected correctly. In the fourth step, 12 driving experiments are conducted, with each participant driving three times on the designated experimental section.
Since the road section under study is part of the campus road network, complete LiDAR coverage of the vehicle's entire driving trajectory is not possible. Therefore, the analysis focuses on the data collected from the road section that the LiDARs can cover. In this experiment, a total of 12 test collections are conducted, yielding both the vehicle information dataset and the infrastructure perception information dataset. After eliminating invalid points, the average collection time for each vehicle is 580 s. In total, 69,807 records are obtained by sampling the vehicle information dataset and the roadside perception information dataset with a time window of 100 ms. Subsequently, the fusion information dataset is obtained using the object association matching algorithm proposed in this paper.
After spatial synchronization and filtering, the original LiDAR point cloud data yield target information for every time period, which can be visualized using the RViz tool in the ROS environment. The multi-LiDAR detection of objects on the test section at the T-junction is shown in Fig. 5. More than 20 targets are detected in this frame, including the test vehicle, which is located at the bottom center. The detected objects are output as a MarkerArray in the ROS environment, which contains the distances of each object's center point in the X- and Y-directions in the plane rectangular coordinate system. We select two of these target-rich data collections for the data analysis experiments, defined as dataset 1 and dataset 2, respectively; they are summarized in Table 4. These two datasets contain vehicle perception information and infrastructure LiDAR perception object information, including the collection duration of each dataset, the number of valid coordinate points in the vehicle information dataset and the number of valid coordinate points in the infrastructure dataset.
The two datasets are matched with the improved single-pass algorithm proposed above. The algorithm needs an initial fixed threshold T_p, which is used to judge the similarity between two objects; the threshold T is then calculated by combining T_p with the weighted dynamic speed value. Threshold T is used to judge the similarity between the coordinate information obtained by the vehicle's integrated navigation and the elements of the object coordinate set C perceived by the infrastructure multi-LiDAR. In this experiment, since the vehicle speed on campus is limited to 25 km/h, namely 6.94 m/s, and to achieve association matching of the V2I information with as little error as possible, the fixed threshold T_p is set to 0.5 m (a point farther than 0.5 m is clearly an invalid point). The weight a of the speed value is set to 0.1, i.e., a bias of 0-0.694 m is added to the fixed threshold.
Thus, the other object information perceived by multi-LiDAR is transmitted to the vehicle to compensate for the surrounding driving environment, especially the blind field of vision.Figure 6 shows the vehicle trajectory after vehicle and infrastructure information fusion in the third test experiment.The vehicle trajectory after fusion is relatively smooth.
The fusion of vehicle perception and roadside perception information is achieved through the Single-Pass algorithm.The algorithm utilizes the vehicle position obtained from the integrated navigation system to locate the corresponding vehicle within the roadside perception object information.By doing so, the other perception objects can be treated as the primary objects for assessing the traffic conditions in the surrounding environment.This approach facilitates the fundamental fusion of vehicle and road information.

Verification of the Results
To assess the accuracy of the improved single-pass algorithm, this paper employs a test dataset consisting of vehicle trajectories obtained from the fusion information dataset.The true values for comparison are derived from the vehicle trajectories obtained from the integrated navigation system dataset.The trajectory information of the experimental vehicle, including the time series, is extracted from the matched data for precise calculation.The results obtained from the improved single-pass algorithm are then compared with those obtained from the Hungarian Algorithm, KM Algorithm, and NN Algorithm.
Figure 7 shows the comparison of the improved single-pass algorithm proposed in this paper with the Hungarian algorithm, KM algorithm and nearest neighbor algorithm. As seen from the figure, the matching accuracy of the improved single-pass algorithm with respect to the real trajectory is 0.937, which is 6.60%, 1.85% and 2.07% higher than that of the other association matching algorithms, respectively.
To further compare the effectiveness of the proposed object association matching algorithm and the other algorithms on vehicle motion state sequences, we analyze the curvature of the matched trajectories. The gentler the curvature mean of a trajectory is, the lower the degree of jumping between consecutive coordinates in the trajectory.
To calculate the curvature at a trajectory point, this paper uses three points to determine the curvature of a conic curve. P = {P_i(x_i, y_i) | i ∈ (1, …, n)} is the set of object trajectory points, and n is the number of trajectory points. To calculate the curvature at a point P_i(x_i, y_i), a previous point P_k(x_k, y_k) and a following point P_j(x_j, y_j) are required; these are selected with a sliding time window. The algorithm smooths possible jump points in the curvature calculation by setting a time window g and fits the curvature C_i of each point as

C_i = F(P_k, P_i, P_j),  (2)

where g is the window size and F(P_k, P_i, P_j) is the function that calculates the curvature at trajectory point P_i using trajectory points P_k and P_j.
In this paper, the curvature at point P_i is calculated from the three known points P_k, P_i and P_j. F is a curvature function of three points; for ease of calculation, the three discrete points are fitted with a polynomial curve. The derivatives of the fitted curve are then evaluated at point P_i and substituted into Eq. (3) to obtain the curvature of each trajectory point:

C_i = |x'_i y''_i − y'_i x''_i| / ((x'_i)^2 + (y'_i)^2)^(3/2),  (3)

where (x'_i, y'_i) and (x''_i, y''_i) represent the first-order and second-order derivatives of the fitted curve at point P_i, respectively.
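A minimal sketch of Eqs. (2)-(3), assuming the previous and following points are the sliding-window samples around P_i and treating the three points as samples of a quadratic curve at t = -1, 0, 1 so that central differences give the derivatives (an illustrative discretization, not necessarily the authors' exact fit):

```python
def curvature(p_prev, p_i, p_next):
    """Curvature at p_i from three trajectory points (Eq. 3):
    C_i = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), with derivatives taken
    by central differences of the quadratic through the points."""
    (x1, y1), (x2, y2), (x3, y3) = p_prev, p_i, p_next
    dx = (x3 - x1) / 2.0           # x'(0)
    dy = (y3 - y1) / 2.0           # y'(0)
    ddx = x3 - 2.0 * x2 + x1       # x''(0)
    ddy = y3 - 2.0 * y2 + y1       # y''(0)
    denom = (dx * dx + dy * dy) ** 1.5
    if denom == 0.0:
        return 0.0                 # degenerate: coincident points
    return abs(dx * ddy - dy * ddx) / denom
```

As a sanity check, three collinear points yield zero curvature, and three points on the parabola y = x^2 around the origin yield its known curvature of 2.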
Figure 8 compares the curvature distributions corresponding to the improved single-pass algorithm, Hungarian algorithm, KM algorithm and NN algorithm on test datasets 1 and 2, which characterize the smoothness of the vehicle track. Test dataset 1 and test dataset 2 are obtained by relatively straight driving on a straight road. The results show that the accuracy of the Hungarian algorithm is not high, and the curvature of the vehicle trajectory it produces fluctuates greatly. The curvature fluctuations of the KM and NN algorithms are also relatively large, so their errors are large. In contrast, the fluctuations of the improved single-pass algorithm proposed in this paper are small and smooth overall, indicating that the resulting trajectory curvature is closer to the real value, so this algorithm can more accurately complete object association matching in a V2I system.
Since the magnitude of the curvature alone cannot serve as an absolute measure for comparing the smoothness of tracks, we further compare the curvature mean, variance and standard deviation obtained by the various methods. The mean reflects the expected value of the overall curvature of the trajectory. The variance measures the degree of dispersion of the trajectory curvature treated as a random variable. The standard deviation reflects the degree of dispersion among the curvatures at the individual points of the trajectory. Table 5 reports the calculated statistical indicators. The results show that the mean, variance and standard deviation obtained by the improved single-pass algorithm proposed in this paper are 0.0087, 0.0054 and 0.0735, respectively, which are closest to the real values of the vehicle motion state, indicating that the algorithm can effectively calculate the vehicle motion state in a V2I system.
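The three smoothness indicators compared in Table 5 can be computed directly with the standard library; the helper name is ours, and population (rather than sample) statistics are an assumption:

```python
import statistics

def curvature_stats(curvatures):
    """Mean, variance and standard deviation of a trajectory's curvature
    series -- the smoothness indicators of Table 5 (population form)."""
    mean = statistics.mean(curvatures)
    var = statistics.pvariance(curvatures)
    std = statistics.pstdev(curvatures)
    return mean, var, std
```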
To further illustrate that the fused sensing information is more effective than the information before fusion, we compare the fused information with the integrated navigation positioning information; the resulting curvature distributions are shown in Fig. 9. They illustrate the smoothness of the trajectories corresponding to the vehicle coordinate information we obtained.
The statistical indicators of the mean, variance, standard deviation and coefficient of variation of the curvature of the two methods are shown in Table 6.
From the table, the mean, variance, standard deviation and coefficient of variation of the curvature of the vehicle position information are 0.0450, 0.2276, 0.0518, and 0.2417, respectively, while those of the fused information are 0.0028, 0.0554, 0.0031, and 0.0505, respectively. The curvature indices obtained from the fused information are all better than those of the vehicle position trajectories; compared with the vehicle-only information, the fused information is denser, less dispersed in curvature and more continuous in trajectory, which benefits the subsequent data analysis.
By compensating with the roadside sensing object information, the vehicle trajectory becomes smoother and its accuracy is improved.

Conclusions
In the V2I system, vehicles have the capability to acquire abundant sensing data, enabling them to perceive the surrounding environment more accurately and comprehensively.However, the sensing information obtained from different sensors exhibits heterogeneity and multiple modes.
Inconsistent perception information between vehicles and infrastructure, as well as the lack of synchronization in time and space, pose challenges for object association matching.Therefore, this paper focuses on studying the association matching method between vehicle sensing information and roadside perception objects, which holds great significance for the development of autonomous driving.
The main contributions of this paper are as follows: (1) We propose a method for collecting vehicle sensing information through an integrated navigation system. (2) The DBSCAN algorithm is employed to cluster the filtered point cloud data obtained from multiple LiDARs deployed on the roadside, allowing the extraction of dynamic object information in the driving environment. (3) An improved single-pass algorithm is introduced to fuse V2I perception information, facilitating the association and matching of vehicle and infrastructure perception objects. (4) The effectiveness of the algorithm is verified through experiments. The results demonstrate that the perception data after object association matching can better reflect the vehicle's motion state and the presence of surrounding moving objects. The proposed method can be applied to vehicle motion behavior recognition and vehicle trajectory prediction in the V2I system.
However, this paper has certain limitations. The accuracy and effectiveness of object matching are also influenced by the performance of the LiDAR; higher-resolution, higher-performance LiDAR systems can lead to better target matching results. In addition, this study only considers the fusion of navigation and positioning devices with roadside multi-LiDAR, without considering the fusion of sensing data from other devices such as cameras. Furthermore, the comparison with other fusion perception methods is limited. Future research will explore alternative perception fusion methods, expand the study to incorporate more sensing devices, and investigate vehicle motion behavior and decision-making based on the V2I system using perception data.

Fig. 1 The main framework structure of the paper

Algorithm 1: DBSCAN algorithm for the filtered laser point cloud. Input: current frame point cloud P after filtering. Output: cluster set C.

Algorithm 2: Improved single-pass object association matching algorithm. Input: speed at the previous moment v_{t-1}, current vehicle position p_t = {x_t, y_t}, roadside perception object position set C = {c_n = (x_n, y_n) | n ∈ {1, …, N}}, initialized weight a. Output: object set C_new after association matching. Initialization: set the match flag o to 0 and the threshold T = T_p + a · v_{t-1}, where T_p is the fixed threshold.

Fig. 3 Schematic diagram of the experimental test road

Fig. 7 Comparison of the accuracy of each algorithm

Fig. 9 Curvature of partial integrated navigation positioning information and the fusion information

Table 1
The main fields of the vehicle dataset

Table 5
Comparison of curvature statistical indices of different algorithms

Table 6
Comparison of curvature statistical indices of the fused information and the integrated navigation positioning information