Lidar-Monocular Visual Odometry with Genetic Algorithm for Parameter Optimization

  • Adarsh Sehgal
  • Ashutosh Singandhupe
  • Hung Manh La
  • Alireza Tavakkoli
  • Sushil J. Louis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11845)

Abstract

Lidar-Monocular Visual Odometry (LIMO), an odometry estimation algorithm, combines a camera and a Light Detection And Ranging (LIDAR) sensor for visual localization by tracking features across camera images and associating them with LIDAR depth measurements. LIMO then estimates motion using Bundle Adjustment over robust keyframes, and uses semantic labelling and weighting of vegetation landmarks to reject outliers. A drawback of LIMO, as of many other odometry estimation algorithms, is that it has many parameters that must be manually tuned to dynamic changes in the environment in order to reduce translational error. In this paper, we present and motivate the use of a Genetic Algorithm (GA) to optimize LIMO's parameters and maximize its localization and motion estimation performance. We evaluate our approach on the well-known KITTI odometry dataset and show that the GA helps LIMO reduce translation error across different sequences.
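The GA loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter bounds are hypothetical stand-ins for LIMO's tunable settings, and the fitness function is a toy surrogate, since the fitness in the paper would be the (negated) translation error obtained by running LIMO on a KITTI sequence with the candidate parameters.

```python
import random

# Hypothetical parameter ranges (illustrative only, not actual LIMO keys):
# e.g. an outlier-rejection threshold and two landmark weights.
PARAM_BOUNDS = [(0.1, 5.0), (0.0, 1.0), (0.0, 1.0)]

def evaluate(params):
    """Placeholder fitness. In the paper this would be the negative
    translation error of LIMO run with `params`; here we use a toy
    quadratic surrogate with a fictitious optimum."""
    target = [1.5, 0.4, 0.7]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS]

def tournament(pop, fits, k=3):
    # Pick k random individuals, return the fittest of them.
    contenders = random.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: fits[i])]

def crossover(a, b):
    # One-point crossover between two parameter vectors.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.2):
    # Resample each gene uniformly within its bounds with probability `rate`.
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, PARAM_BOUNDS)]

def run_ga(pop_size=30, generations=40, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        fits = [evaluate(p) for p in pop]
        best = pop[max(range(pop_size), key=lambda i: fits[i])]
        nxt = [best]  # elitism: carry the best individual forward
        while len(nxt) < pop_size:
            child = crossover(tournament(pop, fits), tournament(pop, fits))
            nxt.append(mutate(child))
        pop = nxt
    fits = [evaluate(p) for p in pop]
    return max(zip(fits, pop))  # (best fitness, best parameter vector)

if __name__ == "__main__":
    fitness, params = run_ga()
    print(fitness, params)
```

Each generation evaluates every candidate parameter vector, keeps the elite, and breeds the rest via tournament selection, crossover, and mutation; in the paper's setting the expensive step would be the fitness evaluation, since each one requires a full LIMO run.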

Keywords

LIMO · Genetic Algorithm · Sensor odometry

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Nevada, Reno, USA