Abstract
This paper proposes an integrated navigation algorithm based on deep learning that uses lidar and inertial measurements. The proposed method develops a new synthetic neural network structure that implements lidar-inertial odometry (LIO) to generate six-degree-of-freedom pose estimates. The network consists of component neural networks that reflect each sensor’s characteristics, followed by an integrating network that combines the estimates from the heterogeneous sensors at the terminal stage. To secure efficient estimation performance, a compound loss function is designed. The performance of the proposed deep-learning-based LIO algorithm was verified on artificially generated data sets produced with a high-fidelity dynamics simulator. Instead of the well-known ground-vehicle reference data sets, the employed data set reflects the full 3D dynamic characteristics of a drone as well as low-cost sensor characteristics, with onboard implementation in mind. Using this flight-simulator data set, the estimation performance of the proposed synthetic network is demonstrated.
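For illustration, the structure described above (per-sensor component networks, a CNN branch for lidar and an LSTM branch for inertial data as in the paper's title, merged by an integrating network that regresses a 6-DOF pose under a compound loss) can be sketched in TensorFlow/Keras. This is a minimal sketch, not the authors' implementation: the input shapes, layer sizes, rotation parameterization, and loss weighting are illustrative assumptions, and only the overall component-plus-integrating structure follows the abstract.

```python
# Minimal sketch (not the paper's code) of a CNN + LSTM lidar-inertial
# odometry network with a compound pose loss, using TensorFlow/Keras.
# Input shapes, layer sizes, and the loss weight 'beta' are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

# Lidar branch: a CNN over a projected range image (assumed 64x720x1).
lidar_in = layers.Input(shape=(64, 720, 1), name="lidar_range_image")
x = layers.Conv2D(32, 3, strides=2, activation="relu")(lidar_in)
x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
x = layers.Conv2D(128, 3, strides=2, activation="relu")(x)
lidar_feat = layers.GlobalAveragePooling2D()(x)

# Inertial branch: an LSTM over a window of IMU samples
# (assumed 100 samples x 6 axes: 3 gyro + 3 accel).
imu_in = layers.Input(shape=(100, 6), name="imu_window")
imu_feat = layers.LSTM(128)(imu_in)

# Integrating network: fuse the heterogeneous features at the terminal stage
# and regress the 6-DOF relative pose.
fused = layers.Concatenate()([lidar_feat, imu_feat])
fused = layers.Dense(256, activation="relu")(fused)
trans = layers.Dense(3, name="translation")(fused)   # x, y, z
rot = layers.Dense(3, name="rotation")(fused)        # e.g., Euler angles

model = Model(inputs=[lidar_in, imu_in], outputs=[trans, rot])

# Compound loss: weighted sum of translation and rotation errors.
# beta is a hypothetical fixed weight; learned weighting is a common alternative.
beta = 100.0
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss={"translation": "mse", "rotation": "mse"},
    loss_weights={"translation": 1.0, "rotation": beta},
)
model.summary()
```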
Additional information
This work was supported by the Korean National Research Fund (NRF-2019R1A2B5B01069412, NRF-2020M3C1C1A01086408) and the Information Technology Research Center support program (IITP-2021-2018-0-01423) supervised by the IITP.
Hyunjin Son received her M.S. degree from the Department of Aerospace Information Engineering, Konkuk University, Korea, in 2020. Her research interests include the development of navigation systems and deep learning.
Byungjin Lee received his Ph.D. degree from the Department of Aerospace Information Engineering, Konkuk University, Korea, in 2017. He is currently a research professor at Konkuk University. His research interests include the development of navigation and control systems for unmanned vehicles.
Sangkyung Sung is a Professor at the Department of Aerospace Information Engineering, Konkuk University, Korea. His research interests include inertial sensors, integrated and seamless navigation, and their applications to mechatronics and unmanned intelligent systems.
Cite this article
Son, H., Lee, B. & Sung, S. Synthetic Deep Neural Network Design for Lidar-inertial Odometry Based on CNN and LSTM. Int. J. Control Autom. Syst. 19, 2859–2868 (2021). https://doi.org/10.1007/s12555-020-0443-2