Abstract
Ego vehicle state estimation is integral to every autonomous driving software stack. The estimation of the state and of its components, such as the side slip angle, is crucial for tracking the vehicle's maneuvers. In the absence of a sensor that directly measures the side slip angle, most of the existing literature either uses observers such as Kalman filters or relies on non-modular factor graphs that model the lateral dynamics. However, the modularity of such graphs, i.e., their ability to integrate multiple asynchronous sensors that provide disentangled measurements, such as LiDAR, GNSS, and IMU, is still overlooked in the literature. In this work, we propose a novel factor graph-based architecture that builds on the vehicle dynamics at its core, fuses multiple sensors asynchronously, and performs robust and accurate state estimation.
We validate the proposed algorithm against two baselines: a model-based Extended Kalman Filter and a factor graph-based state estimator that uses the IMU pre-integration factor as its reference factor. The algorithms are validated on a custom dataset collected with an in-house vehicle.
1 Introduction
Any robotic application, such as autonomous driving, requires knowledge of the vehicle's state, i.e., position, orientation, velocity, etc., to provide autonomous functionality. The measurements obtained from different sensors cannot be directly interpreted as the vehicle state due to various challenges, including noisy measurements, asynchronous sensor readings, and discrepancies between the sensor measurement format and the state representation. Hence, the state is estimated from the measurements obtained from different sensors. In the literature, state estimation algorithms can be broadly categorized into filtering-based methods, built upon the Kalman filter and its variants such as the Extended Kalman Filter (EKF) [2, 8] and the Unscented Kalman Filter (UKF) [1, 8], and graph-based methods, using either a pose graph [12] or a factor graph [4, 5]. While filtering-based approaches are primarily employed for state estimation, graph-based approaches are the primary choice for Simultaneous Localization and Mapping (SLAM). With the advancement of learning-based algorithms, some works use deep learning techniques to perform state estimation [9,10,11,14]. Even though deep learning-based algorithms provide reliable solutions, they are often constrained to the systems they are trained on due to measurable domain gaps, and they require ground truth data to optimize the algorithm.
The representation of the vehicle state varies across different works, with each adopting distinct formats and modeling approaches for its components. Among these components, side slip angle estimation proves to be one of the most challenging, particularly within the context of modeling vehicle lateral dynamics. The absence of a dedicated sensor for direct side slip angle measurement has prompted the exploration of various techniques in the literature, often involving GNSS and optical sensors. However, the reliability of GNSS sensors in urban canyons and the cost associated with optical sensors present significant challenges. The UKF approach in [1] models the vehicle motion using the single-track kinematic model and does not explicitly compute the side slip angle. The authors of [2] address this by including the vehicle side slip angle in the state representation and modeling the vehicle motion with a single-track version of the four-wheel vehicle model; however, the filtering-based approach is not modular and hence not designed to include additional sensors in the framework. The authors of [3] propose a factor graph-based state estimator that models the vehicle's lateral dynamics using a single-track vehicle dynamics model. However, this work is confined to representing only the vehicle's lateral dynamics, which restricts the incorporation of additional sensors such as the Inertial Measurement Unit (IMU), GNSS, LiDAR, and cameras into the broader state estimation framework. Our earlier work [4] is similar to the proposed architecture, but it only tests the inclusion of a vehicle kinematic factor into the factor graph without explicitly modeling the vehicle's lateral dynamics.
In this work, we propose a factor graph-based state estimation architecture that takes the single-track vehicle dynamics model as the reference factor and integrates GNSS measurements and LiDAR odometry for vehicle state estimation. We demonstrate the superiority of the proposed algorithm in modeling the vehicle's lateral dynamics compared to the baselines, and its modularity in integrating multiple asynchronous external sensors, enabling robust state estimation in GNSS outage scenarios. The major contributions of the proposed framework are threefold:
- We propose a novel and modular factor graph-based state estimation framework that can utilize measurements from IMU, LiDAR, GNSS, and In-Vehicle sensors (i.e., velocity and steering encoders) to provide consistent and reliable state estimation.
- A filter-like implementation of the architecture enables real-time state estimation at high frequency; the frequency can be regulated to meet the user's needs.
- We demonstrate that the proposed algorithm provides robust and accurate lateral dynamics state estimates when compared to the IMU preintegration-based factor graph [4] and the baseline EKF [2].
2 Problem Formulation
For the state representation, we take the position of the IMU as the reference point and assign it as the origin of the Vehicle Reference Frame (V). \({X}_t\) represents the state, or values, of the factor graph,

\({X}_t = [\, _Ix^G, \ _Iy^G, \ _I\psi ^G, \ V_x^I, \ \beta ^I, \ \omega ^I \,]^{\top },\)   (1)

where \([ _Ix^G, \ _Iy^G]\) and \( _I\psi ^G\) are the IMU's position and orientation in the Global Reference Frame (G), and \(V_x^I\), \(\beta ^I\), and \(\omega ^I\) are the longitudinal velocity, side slip angle, and yaw rate in the IMU frame. The subindex t corresponds to the time instance.
If we denote by \({\textbf {X}}^G_t\) the set of factor graph states up to time instance t, the algorithm needs to estimate the posterior distribution \(p({\textbf {X}}^G_t|{\textbf {Z}}_t)\), where \({\textbf {Z}}_t\) represents all the sensor measurements up to time instance t. Estimation can be performed using Maximum A Posteriori (MAP) inference for \({\textbf {X}}^G_t\), which is represented as:

\(\hat{{\textbf {X}}}^G_t = \mathop {\textrm{argmax}}_{{\textbf {X}}^G_t} \; p({\textbf {X}}^G_t|{\textbf {Z}}_t),\)   (2)

while we can approximate the posterior \(p({\textbf {X}}^G_t|{\textbf {Z}}_t)\) using Bayes' rule as

\(p({\textbf {X}}^G_t|{\textbf {Z}}_t) \propto p({\textbf {Z}}_t|{\textbf {X}}_t^G)\, p({\textbf {X}}_0),\)   (3)

where \(p({\textbf {Z}}_t|{\textbf {X}}_t^G)\) is the measurement likelihood density and \(p({\textbf {X}}_0)\) is the prior on the states. Given the measurements obtained from the LiDAR in point cloud format, the linear acceleration and angular velocity measurements from the IMU, the global position measurements in latitude, longitude, and altitude format from the GNSS sensors, the steering angle measurement from the steering encoder, and the longitudinal velocity measurement from the velocity encoder, the algorithm provides the estimates of \({\textbf {X}}_t^G\) at time instance t by solving Eq. (3) for the respective states.
The overall architecture of the algorithm is illustrated in Fig. 1, and a detailed representation is provided in Fig. 2. The factor graph is designed to accommodate the vehicle dynamics factor as a base factor and to add additional sensor measurements as they become available.
2.1 Factor Graph
A factor graph is constructed by taking the synchronized In-Vehicle sensors' measurements as reference points in a temporal sense. These sensors have the highest frequency in most sensor setups and hence can provide connection points to other sensors with a high degree of flexibility. Using the Robot Operating System (ROS) framework, the factor graph is constructed by taking their timestamps as the reference points, and new values are added to the graph concurrently. The modular nature of the graph enables the addition and removal of sensors based on their availability. The sensor setup consists of two GNSS sensors located at the front and the rear of the vehicle operating at 10 Hz, one Hesai 32-plane LiDAR operating at 20 Hz, one Bosch 5-axis IMU, and the In-Vehicle sensors, all operating at 100 Hz.
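As an illustration of this synchronization step, the sketch below (our own, not from the paper's codebase) attaches a low-rate measurement to the closest 100 Hz reference node by timestamp; the function name and millisecond timestamps are assumptions for the example.

```python
import bisect

def nearest_reference_index(ref_stamps, stamp):
    """Return the index of the reference timestamp closest to `stamp`.

    `ref_stamps` must be sorted ascending; in the paper's setup these
    are the In-Vehicle sensor timestamps, added to the graph at 100 Hz.
    """
    i = bisect.bisect_left(ref_stamps, stamp)
    if i == 0:
        return 0
    if i == len(ref_stamps):
        return len(ref_stamps) - 1
    # pick whichever neighbouring reference node is closer in time
    return i if ref_stamps[i] - stamp < stamp - ref_stamps[i - 1] else i - 1

# 100 Hz reference grid in milliseconds; a 10 Hz GNSS stamp lands between nodes
refs = list(range(0, 1000, 10))               # 0, 10, ..., 990 ms
idx = nearest_reference_index(refs, 104)      # nearest node is refs[10] = 100 ms
```

A new GNSS or LiDAR factor would then be connected to the graph value created at `refs[idx]`.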
Filter-Like Architecture. The posterior distribution in Eq. 3 is proportional to the product of a prior distribution \(p({\textbf {X}}_0)\) and the measurement likelihood \(p({\textbf {Z}}_t|{\textbf {X}}_t^G)\). This allows for an iterative solution of the MAP problem by bounding the optimization horizon to a computationally tractable window [5]. The MAP inference in Eq. 2 is solved using a factor graph [13] in this work. A factor graph is a bipartite graph \(\mathcal {G} = (\mathcal {V},\mathcal {F})\), with \(\mathcal {V}\) the set of values and \(\mathcal {F}\) the set of factors. For a given factor graph, the MAP inference in Eq. 2 can be rewritten as a product of multiple factors:

\(p({\textbf {X}}^G_t|{\textbf {Z}}_t) \propto \prod _{i} \phi ^i(X^i),\)   (4)
where the index i refers to the factor number. Each factor can result from a measurement or constraint received from a different sensor. Assuming that each factor is of the type

\(\phi ^i(X^i) \propto \exp \left( -\frac{1}{2}\Vert h^i(X^i)-z^i\Vert ^2_{\varSigma _i}\right) ,\)   (5)
measurements from different sensors can be easily integrated into the factor graph as long as the measurement model \(h^i(X^i)\) can be defined. The MAP inference problem can then be modeled as a least-squares optimization problem with residual error \(e^i = h^i(X^i)-z^i\), where \(z^i\) is a measurement received from a sensor. The measurement noise is assumed to be normally distributed with zero mean and covariance \(\varSigma _i\). This factorized implementation facilitates the modular architecture of the factor graph-based algorithm and hence enables sensor fusion across measurements coming from different sensors with different modalities.
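The factorized MAP problem above reduces to a nonlinear least-squares problem over the stacked, covariance-weighted residuals. A self-contained toy sketch of that machinery (ours; the paper relies on GTSAM's optimizer rather than a hand-rolled solver) fusing two Gaussian factors by Gauss-Newton:

```python
import numpy as np

def gauss_newton(x0, factors, iters=10):
    """Minimise sum_i ||h_i(x) - z_i||^2_{Sigma_i} by Gauss-Newton.

    Each factor is a tuple (h, z, Sigma), where h(x) returns the
    predicted measurement and its Jacobian. This is a toy stand-in for
    the factor-graph optimisation described in the text.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        H, b = np.zeros((len(x), len(x))), np.zeros(len(x))
        for h, z, Sigma in factors:
            val, J = h(x)
            W = np.linalg.inv(Sigma)       # information matrix Sigma^{-1}
            e = val - z                    # residual e^i = h^i(x) - z^i
            H += J.T @ W @ J               # accumulate normal equations
            b += J.T @ W @ e
        x -= np.linalg.solve(H, b)         # Gauss-Newton step
    return x

# two scalar "position" factors on a 1-D state disagree; MAP fuses them
f1 = (lambda x: (x.copy(), np.eye(1)), np.array([1.0]), np.eye(1) * 0.5)
f2 = (lambda x: (x.copy(), np.eye(1)), np.array([2.0]), np.eye(1) * 0.5)
x_hat = gauss_newton(np.array([0.0]), [f1, f2])   # equal weights -> midpoint
```

With equal covariances, the fused estimate lands at the midpoint 1.5, matching the intuition that each factor pulls the state toward its own measurement with weight \(\varSigma _i^{-1}\).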
In this work, we implement a filter-like factor graph-based state estimator, taking inspiration from [5]. Given that the In-Vehicle sensors have the highest frequency in most vehicles, their timestamps are used to add values to the factor graph. Having established a factor graph with reference values at high frequency, a variety of sensors can be easily integrated. Most control algorithms require the state to be estimated at a high frequency for smooth operation; however, LiDAR odometry and GNSS measurements are usually available at a lower frequency of 10–20 Hz. The filter-like implementation therefore uses predict-optimize iterations to provide state estimates at the In-Vehicle sensors' frequency. With each In-Vehicle sensor measurement, the state is predicted to the next time instance using the single-track vehicle dynamics model, and once a LiDAR or GNSS factor is added to the graph, the graph is optimized to obtain an updated state.
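The predict-optimize loop can be sketched as follows. This is a simplified stand-in under loud assumptions: the prediction uses a constant-velocity kinematic step instead of the paper's single-track dynamics, and the graph optimization is collapsed into a simple correction toward the low-rate fix; all names are ours.

```python
import numpy as np

def predict(state, v, omega, dt):
    """Constant-velocity kinematic prediction of [x, y, yaw].

    Stand-in for the single-track dynamics prediction used in the paper.
    """
    x, y, yaw = state
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + omega * dt])

def filter_like_loop(v, omega, gnss, dt=0.01, gnss_every=10, gain=0.5):
    """Predict at the In-Vehicle rate (100 Hz here); 'optimise' (a simple
    correction toward the GNSS fix, standing in for graph optimisation)
    whenever a low-rate factor arrives (10 Hz here)."""
    state = np.zeros(3)
    out = []
    for k in range(len(gnss) * gnss_every):
        state = predict(state, v, omega, dt)      # high-rate predict step
        if (k + 1) % gnss_every == 0:             # low-rate factor arrives
            z = gnss[(k + 1) // gnss_every - 1]
            state[:2] += gain * (z - state[:2])   # surrogate for optimise
        out.append(state.copy())
    return out

# straight-line drive at 1 m/s; GNSS fixes agree with the prediction
traj = filter_like_loop(1.0, 0.0, np.array([[0.1, 0.0], [0.2, 0.0]]))
```

The key property illustrated is that state estimates come out at the high rate of the prediction step, while corrections happen only at the arrival of low-rate factors.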
2.2 Vehicle Dynamics Factor
To construct the factor graph within the framework, the vehicle dynamics are incorporated into residual error functions that are minimized through factor graph optimization. Equation 6 represents the residual errors pertaining to the vehicle dynamics and connects two states, or values, denoted \(X^k\) and \(X^{k-1}\), within the graph. Here, \(v_y\), \(v_x\), and \(\omega _z\) are the lateral, longitudinal, and angular velocities of the vehicle, respectively. Additionally, \(C_f\) and \(C_r\) denote the cornering stiffnesses of the front and rear tires, while \(l_f\) and \(l_r\) represent the distances from the vehicle's center of gravity to the front and rear axles, respectively. The steering angle \(\delta \) and longitudinal acceleration \(a_x\) are measured by the steering encoder and the IMU. A single dynamics factor is established by linearizing Eq. 6. In the factor graph, values are added at the frequency of the In-Vehicle sensors, and two consecutive values are linked by the factors depicted in Eq. 6.
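Equation 6 itself is not reproduced here, but the quantities listed above match the standard linear single-track (bicycle) dynamics model. As a sketch, assuming a vehicle mass \(m\) and yaw moment of inertia \(I_z\) (neither symbol is defined in the text, so both are our assumption), the continuous-time dynamics that such residuals are typically built on read:

```latex
\begin{aligned}
\dot{v}_y &= -\frac{C_f + C_r}{m\,v_x}\,v_y
  + \left(\frac{C_r l_r - C_f l_f}{m\,v_x} - v_x\right)\omega_z
  + \frac{C_f}{m}\,\delta ,\\
\dot{\omega}_z &= \frac{C_r l_r - C_f l_f}{I_z\,v_x}\,v_y
  - \frac{C_f l_f^2 + C_r l_r^2}{I_z\,v_x}\,\omega_z
  + \frac{C_f l_f}{I_z}\,\delta ,\\
\dot{v}_x &= a_x .
\end{aligned}
```

A residual of the form \(e = X^k - f(X^{k-1}, \delta , a_x, \varDelta t)\), with \(f\) a discretization of these dynamics, would then link the two consecutive values as described.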
2.3 Measurements Factor
The measurement factor incorporates the lateral acceleration \(\hat{a}_y^k\) and yaw rate \(\hat{\omega }_z^k\) measurements from the IMU, as well as the longitudinal velocity measurement \(\hat{v}_x^k\) from the velocity encoder. This integration is expressed through the residual of these measurements connected with the value at time instance k, denoted \(X^k\), as illustrated in Eq. 7. Additionally, the steering angle at time instance k, \(\delta ^k\), is also considered in the formulation.
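Equation 7 is likewise not reproduced. A plausible form of this residual, assuming on our part the common planar relation \(a_y = \dot{v}_y + v_x\,\omega _z\) for the lateral acceleration row, would be:

```latex
e_{\mathrm{meas}}^k =
\begin{bmatrix}
\hat{v}_x^k - v_x^k \\
\hat{\omega}_z^k - \omega_z^k \\
\hat{a}_y^k - \left(\dot{v}_y^k + v_x^k\,\omega_z^k\right)
\end{bmatrix},
```

i.e., each row whitens the difference between a sensed quantity and its prediction from the value \(X^k\); the exact rows and their ordering in Eq. 7 may differ.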
2.4 GNSS and LiDAR Factors
The GNSS sensor provides the location of the vehicle in the frame G. The sensor setup has two RTK-corrected GNSS sensors, located at the front and rear of the vehicle, which can therefore also provide the vehicle's global yaw and pitch angles. The GNSS measurements are transformed to the IMU reference point of the vehicle using the known static transformation between the GNSS sensor and the IMU, and are subsequently added to the factor graph as unary factors,
where \( ^{I}\hat{p}^{G}\) is the GNSS measurement transformed to the IMU location of the vehicle, expressed in the G frame, and \( _{GN}R^I\) and \( _{GN}p^I\) represent the rotation and translation components of the static transformation between the IMU and the GNSS sensor.
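A minimal sketch of this lever-arm correction (function and variable names are ours; the paper only states that the known static transform between the sensors is applied):

```python
import numpy as np

def gnss_to_imu_point(p_gnss_G, R_G_V, lever_V):
    """Move a GNSS position fix to the IMU reference point.

    p_gnss_G : GNSS antenna position in the global frame G
    R_G_V    : rotation from the vehicle frame V into G (e.g. from yaw)
    lever_V  : static GNSS-to-IMU lever arm expressed in V
    """
    return p_gnss_G + R_G_V @ lever_V

yaw = np.pi / 2                               # vehicle facing +Y in G
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
lever = np.array([-1.0, 0.0, 0.0])            # IMU 1 m behind the antenna
p_imu = gnss_to_imu_point(np.array([10.0, 5.0, 0.0]), R, lever)
```

The corrected point `p_imu`, not the raw antenna fix, is what enters the graph as the unary factor's measurement.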
To compute the LiDAR odometry, we employ a loosely coupled odometry module based on [6]. The module directly subscribes to the point cloud topics published by the LiDAR sensor, and the relative transformation between two consecutive point clouds is used as a binary factor between the two values in the graph closest to the timestamps of the corresponding scans.
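The binary-factor measurement between consecutive scans can be sketched as a relative pose, here in SE(2) for brevity (the actual odometry is 3D and computed by the KISS-ICP pipeline [6]; the helper names are ours):

```python
import numpy as np

def se2_matrix(x, y, yaw):
    """Homogeneous SE(2) transform; enough to illustrate the idea."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def relative_transform(T_prev, T_curr):
    """Binary-factor measurement: motion between consecutive scans,
    T_rel = inv(T_{k-1}) @ T_k, expressed in the previous scan's frame."""
    return np.linalg.inv(T_prev) @ T_curr

T1 = se2_matrix(0.0, 0.0, 0.0)
T2 = se2_matrix(1.0, 0.0, 0.0)       # vehicle moved 1 m forward
T_rel = relative_transform(T1, T2)   # translation part is (1, 0)
```

Because the factor constrains only the relative motion, it stays valid regardless of where the absolute (GNSS-anchored) trajectory places the two values.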
3 Implementation
We design the factor graph and perform state estimation using the Fixed-Lag Smoother in the GTSAM framework [7]. We use the ROS framework to establish callbacks that acquire sensor measurements and to test real-time applicability. The proposed algorithm is validated against two baselines: a model-based EKF [2] and an IMU preintegration-based factor graph [4]. We acquire test data with difficult maneuvers to demonstrate the algorithm's ability to model the vehicle's lateral dynamics and its modularity in integrating additional sensors into the state estimation framework. We use a Datron optical sensor to measure the vehicle's longitudinal and lateral velocities and validate the estimated state against it in complex maneuvers.
4 Results
A series of slalom maneuvers is performed to create the first scene. The trajectory of the vehicle's motion, along with the states estimated by the proposed algorithm and the baselines, is illustrated in Fig. 3. The position estimates from the proposed algorithm and the baselines are very similar, as they all use the RTK-corrected GNSS sensors; however, a significant improvement can be seen in the estimation of the side slip angle. The side slip angle estimated by the proposed factor graph, which utilizes a vehicle dynamics model for motion prediction, closely matches the EKF estimate and the Datron side slip computation, while the estimate from the IMU factor graph is significantly worse at the end of each maneuver.
In the second scenario, we use two scenes, shown in Fig. 4, to validate the algorithm under a GNSS failure case. We withhold both GNSS measurements from the algorithm after a certain time instance from the algorithm's initiation and analyze its state estimation performance. The same procedure is applied to the baseline algorithms. Note that for the baseline EKF, the IMU and In-Vehicle sensor measurements remain consistently available and update the respective states even in the absence of GNSS measurements. Ground truth for position and yaw angle is computed using the two RTK-corrected GNSS positions. Notably, since the GNSS positions are withheld from 5 s onward in Scene 1 and from 50 s onward in Scene 2, the unused GNSS measurements serve as a reliable ground truth reference. This further emphasizes the robust performance of the proposed dynamics graph under GNSS failure conditions. The qualitative analysis in Fig. 4a reveals that the proposed algorithm consistently outperforms both the EKF [2] and the IMU factor graph [4] in vehicle position estimation. Moreover, the side slip angle estimated by the proposed graph is more accurate than that of the IMU graph and the EKF. A similar level of performance is observed in the GNSS failure scenario of Scene 2, illustrated in Fig. 4b. The quantitative results in Table 1 support these findings and show that the proposed dynamics graph consistently outperforms both baseline algorithms in positional and yaw angle estimation. In total, we observe average performance improvements of up to \(60\%\) and \(23\%\) in positional and yaw angle estimation, respectively, compared to the IMU graph.
5 Conclusion
We present a novel factor graph-based state estimation algorithm. The algorithm uses vehicle dynamics factors to establish a base factor graph and integrates multiple asynchronous sensors, such as LiDAR and GNSS, to provide consistent and accurate state estimation at high frequency. We validate the algorithm using data collected with an in-house vehicle equipped with the described sensor setup across multiple scenarios, in which we simulate GNSS sensor failures to check the reliability of the algorithm in urban settings. We demonstrate that the proposed algorithm is highly modular and can seamlessly integrate multiple asynchronous sensors while providing reliable and robust state estimates compared to the baselines.
References
Bersani, M., et al.: An integrated algorithm for ego-vehicle and obstacles state estimation for autonomous driving. Robot. Auton. Syst. 139, 103662 (2021)
Prakash, J., Vignati, M., Sabbioni, E.: Vehicle teleoperation: performance assessment of SRPT approach under state estimation errors (2023)
Leanza, A., Reina, G., Blanco-Claraco, J.L.: A factor-graph-based approach to vehicle sideslip angle estimation. Sensors 21(16), 5409 (2021). https://www.mdpi.com/1424-8220/21/16/5409
Dahal, P., Prakash, J., Arrigoni, S., Braghin, F.: Vehicle state estimation through modular factor graph-based fusion of multiple sensors. In: 2023 IEEE Vehicle Power and Propulsion Conference (VPPC), Milan, Italy, pp. 1–6 (2023). https://doi.org/10.1109/VPPC60535.2023.10403196
Nubert, J., Khattak, S., Hutter, M.: Graph-based multi-sensor fusion for consistent localization of autonomous construction robots. In: 2022 IEEE International Conference on Robotics and Automation (ICRA). IEEE (2022)
Vizzo, I., Guadagnino, T., Mersch, B., Wiesmann, L., Behley, J., Stachniss, C.: KISS-ICP: in defense of point-to-point ICP - simple, accurate, and robust registration if done the right way. IEEE Robot. Autom. Lett. 8(2), 1029–1036 (2023)
Dellaert, F., GTSAM Contributors: borglab/gtsam, May 2022. https://github.com/borglab/gtsam
Bersani, M., Vignati, M., Mentasti, S., Arrigoni, S., Cheli, F.: Vehicle state estimation based on Kalman filters. In: 2019 AEIT International Conference of Electrical and Electronic Technologies for Automotive (2019)
Kong, D., et al.: Vehicle lateral velocity estimation based on long short-term memory network. World Electr. Veh. J. 13(1) (2021)
Revach, G., Shlezinger, N., Ni, X., Escoriza, A.L., van Sloun, R.J.G., Eldar, Y.C.: KalmanNet: neural network aided Kalman filtering for partially known dynamics. IEEE Trans. Signal Process. 70, 1532–1547 (2022)
Dahal, P., Mentasti, S., Paparusso, L., Arrigoni, S., Braghin, F.: RobustStateNet: robust ego vehicle state estimation for autonomous driving. Robot. Auton. Syst. 104585 (2023). https://www.sciencedirect.com/science/article/pii/S0921889023002245
Frosi, M., Matteucci, M.: ART-SLAM: accurate real-time 6DoF LiDAR SLAM (2021)
Dellaert, F., Kaess, M.: Factor graphs for robot perception. Found. Trends Robot. 6 (2017). http://www.cs.cmu.edu/~kaess/pub/Dellaert17fnt.pdf
Chindamo, D., Gadola, M.: Estimation of vehicle side-slip angle using an artificial neural network. In: MATEC Web of Conferences, vol. 166, p. 02001, January 2018
Copyright information
© 2024 The Author(s)
Cite this paper
Dahal, P., Arrigoni, S., Bijelic, M., Braghin, F. (2024). Vehicle State Estimation Through Dynamics Modeled Factor Graph. In: Mastinu, G., Braghin, F., Cheli, F., Corno, M., Savaresi, S.M. (eds) 16th International Symposium on Advanced Vehicle Control. AVEC 2024. Lecture Notes in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-031-70392-8_119