
Synthetic Deep Neural Network Design for Lidar-inertial Odometry Based on CNN and LSTM

  • Regular Papers
  • Robot and Applications
  • Published:
International Journal of Control, Automation and Systems

Abstract

This paper proposes an integrated navigation algorithm based on a deep learning method using lidar and inertial measurements. The proposed method develops a new synthetic neural network structure that implements lidar-inertial odometry (LIO) to generate a 6-degree-of-freedom pose estimate. The network consists of component neural networks that reflect each sensor's characteristics, followed by an integrating network that combines the estimates from the heterogeneous sensors at the terminal stage. To secure efficient estimation performance, a compound loss function design is exploited. The performance of the proposed deep-learning-based LIO algorithm was verified on artificially generated data sets produced by a high-fidelity dynamics simulator. Instead of the well-known reference data sets recorded with ground vehicles, the employed data set reflects the full 3D dynamic characteristics of a drone as well as low-cost sensor characteristics suited to onboard implementation. Through this flight-simulator data set, the estimation performance of the proposed synthetic network was demonstrated.
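The abstract describes the overall structure but not its internals: per-sensor component networks (a CNN branch for the lidar range image, a recurrent branch for the inertial sequence), a terminal-stage integrating network, and a compound loss over the 6-DOF pose. The sketch below is a minimal, hedged illustration of that data flow in plain NumPy, not the paper's actual model: the layer sizes, the dense stand-in for the CNN, the simple Elman cell standing in for the LSTM, and the homoscedastic loss weighting (`s_p`, `s_r`) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class LidarBranch:
    """Stand-in for the lidar CNN component: flattens a projected range
    image and applies one dense layer (a real model would use convolutions)."""
    def __init__(self, in_dim, out_dim=64):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.01

    def __call__(self, x):                      # x: (N, H, W) range images
        return relu(x.reshape(x.shape[0], -1) @ self.W)

class ImuBranch:
    """Stand-in for the IMU LSTM component: a plain recurrent (Elman) cell
    over the inertial sequence, keeping only the final hidden state."""
    def __init__(self, in_dim, hidden=32):
        self.Wx = rng.standard_normal((in_dim, hidden)) * 0.1
        self.Wh = rng.standard_normal((hidden, hidden)) * 0.1

    def __call__(self, seq):                    # seq: (N, T, in_dim)
        h = np.zeros((seq.shape[0], self.Wx.shape[1]))
        for t in range(seq.shape[1]):
            h = np.tanh(seq[:, t] @ self.Wx + h @ self.Wh)
        return h

class IntegratingNetwork:
    """Terminal-stage fusion: concatenate branch features and regress a
    6-DOF pose (3 translation + 3 rotation components)."""
    def __init__(self, feat_dim):
        self.W = rng.standard_normal((feat_dim, 6)) * 0.01

    def __call__(self, lidar_feat, imu_feat):
        return np.concatenate([lidar_feat, imu_feat], axis=1) @ self.W

def compound_loss(pred, true, s_p=0.0, s_r=0.0):
    """A compound pose loss: translation and rotation errors balanced by
    learnable log-variance weights. This weighting scheme is an assumed
    formulation from the geometric-loss literature, not the paper's design."""
    e_p = np.mean(np.sum((pred[:, :3] - true[:, :3]) ** 2, axis=1))
    e_r = np.mean(np.sum((pred[:, 3:] - true[:, 3:]) ** 2, axis=1))
    return e_p * np.exp(-s_p) + s_p + e_r * np.exp(-s_r) + s_r

# Forward pass with toy shapes: a 16x64 range image and a 10-step IMU window.
lidar = LidarBranch(in_dim=16 * 64)
imu = ImuBranch(in_dim=6)
fuse = IntegratingNetwork(feat_dim=64 + 32)

pose = fuse(lidar(rng.standard_normal((4, 16, 64))),
            imu(rng.standard_normal((4, 10, 6))))
# pose has shape (4, 6): one 6-DOF estimate per sample.
```

The key design point the abstract emphasizes is that fusion happens only at the terminal stage, so each branch can be shaped around its own sensor model before the integrating layer sees both feature streams.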



Author information


Correspondence to Sangkyung Sung.

Additional information

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the Korean National Research Fund (NRF-2019R1A2B5B01069412, NRF-2020M3C1C1A01086408) and the Information Technology Research Center support program (IITP-2021-2018-0-01423) supervised by the IITP.

Hyunjin Son received her M.S. degree from the Department of Aerospace Information Engineering, Konkuk University, Korea, in 2020. Her research interests include the development of navigation systems and deep learning.

Byungjin Lee received his Ph.D. degree from the Department of Aerospace Information Engineering, Konkuk University, Korea, in 2017. He now works as a research professor at Konkuk University. His research interests include the development of navigation and control systems for unmanned vehicles.

Sangkyung Sung is a professor at the Department of Aerospace Information Engineering, Konkuk University, Korea. His research interests include inertial sensors, integrated and seamless navigation, and applications to mechatronics and unmanned intelligent systems.


Cite this article

Son, H., Lee, B. & Sung, S. Synthetic Deep Neural Network Design for Lidar-inertial Odometry Based on CNN and LSTM. Int. J. Control Autom. Syst. 19, 2859–2868 (2021). https://doi.org/10.1007/s12555-020-0443-2


Keywords

Navigation