Temporal Up-Sampling of LIDAR Measurements Based on a Mono Camera

  • Conference paper
  • In: Image Analysis and Processing – ICIAP 2022 (ICIAP 2022)

Abstract

Most of the 3D LIDAR sensors used in autonomous driving have significantly lower frame rates than the modern cameras mounted on the same vehicle. This paper proposes a solution that virtually increases the frame rate of the LIDAR by exploiting a mono camera, making it possible to monitor fast-moving dynamic objects in the environment. First, dynamic object candidates are detected and tracked in the camera frames. Next, the LIDAR points corresponding to these objects are identified. Then, virtual camera poses are calculated by back-projecting these points to the camera image and tracking them. Finally, from the virtual camera poses and the known real camera poses, the object motion (the transformation matrix moving the object between frames) is computed for a time instant that has no corresponding LIDAR measurement. Static objects (rigid with the scene) can also be transformed to this time instant, provided the real camera poses are known. The proposed method has been tested on the Argoverse dataset and outperforms earlier methods with a similar purpose.
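To make the geometric idea in the abstract concrete, the following minimal sketch (Python with OpenCV and NumPy; not the authors' implementation, and all function and variable names are illustrative) shows one way the core step could look: a "virtual" camera pose is estimated by PnP between an object's LIDAR points from the last sweep and their tracked image positions at a camera-only timestamp; combining it with the known real camera pose (e.g. from visual SLAM) yields the object's rigid world-frame motion, which can then be applied to the object's LIDAR points to synthesize its position at the new time instant.

```python
# Minimal sketch, assuming: camera intrinsics K (3x3), the real camera pose
# T_cam_t1 (4x4, world -> camera at time t1, e.g. from a SLAM system), the
# object's LIDAR points from the last sweep (t0) expressed in world
# coordinates, and their tracked 2D pixel positions at t1 (e.g. KLT tracks
# of the back-projected points). Function names are hypothetical.
import numpy as np
import cv2


def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T


def object_motion(pts3d_t0_world, pts2d_t1, K, T_cam_t1):
    """Estimate the object's 4x4 world-frame motion between t0 and t1."""
    # "Virtual" camera pose: PnP between the object's t0 3D points and their
    # tracked t1 pixels. It maps t0 world coordinates of the (moved) object
    # directly into the t1 camera frame.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d_t0_world.astype(np.float64),
        pts2d_t1.astype(np.float64),
        K, None)
    if not ok:
        raise RuntimeError("PnP failed for this object")
    R_v, _ = cv2.Rodrigues(rvec)
    T_virtual = to_homogeneous(R_v, tvec)        # world(t0) -> camera(t1)
    # Factor out the real camera pose to isolate the object's own motion:
    # T_virtual = T_cam_t1 @ T_obj  =>  T_obj = inv(T_cam_t1) @ T_virtual.
    return np.linalg.inv(T_cam_t1) @ T_virtual   # world(t0) -> world(t1)


def upsample_object(lidar_pts_t0_world, T_obj):
    """Move the object's LIDAR points to the camera-only timestamp t1."""
    pts_h = np.hstack([lidar_pts_t0_world,
                       np.ones((len(lidar_pts_t0_world), 1))])
    return (T_obj @ pts_h.T).T[:, :3]
```

For static parts of the scene the world-frame motion is the identity, so, as the abstract notes, only the real camera poses are needed to express those points in the sensor frame at the camera-only timestamp; the dynamic-object transform sketched above supplies the remaining, object-specific motion.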


Notes

  1. https://velodynelidar.com.

References

  1. Benedek, C., Majdik, A., Nagy, B., Rozsa, Z., Sziranyi, T.: Positioning and perception in LIDAR point clouds. Digit. Sig. Process. 119, 103193 (2021)

  2. Chang, M.F., et al.: Argoverse: 3D tracking and forecasting with rich maps. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)

  3. Debeunne, C., Vivet, D.: A review of visual-LiDAR fusion based simultaneous localization and mapping. Sensors 20(7), 2068 (2020)

  4. Fan, H., Yang, Y.: PointRNN: point recurrent neural network for moving point cloud processing. arXiv preprint arXiv:1910.08287 (2019)

  5. Gao, X.S., Hou, X.R., Tang, J., Cheng, H.F.: Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 25(8), 930–943 (2003)

  6. He, L., Jin, Z., Gao, Z.: De-skewing LiDAR scan for refinement of local mapping. Sensors 20, 1846 (2020)

  7. Hu, M., Wang, S., Li, B., Ning, S., Fan, L., Gong, X.: Towards precise and efficient image guided depth completion (2021)

  8. Kalal, Z., Mikolajczyk, K., Matas, J.: Forward-backward error: automatic detection of tracking failures. In: 20th International Conference on Pattern Recognition, pp. 2756–2759 (2010)

  9. Li, Y., Bu, R., Sun, M., Wu, W., Di, X., Chen, B.: PointCNN: convolution on X-transformed points. In: Advances in Neural Information Processing Systems, vol. 31. Curran Associates, Inc. (2018)

  10. Liu, H., Liao, K., Lin, C., Zhao, Y., Guo, Y.: Pseudo-LiDAR point cloud interpolation based on 3D motion representation and spatial supervision. IEEE Trans. Intell. Transp. Syst., 1–11 (2021)

  11. Liu, H., Liao, K., Zhao, Y., Liu, M.: PLIN: a network for pseudo-LiDAR point cloud interpolation. Sensors 20, 1573 (2020)

  12. Miller, M.L., Stone, H.S., Cox, I.J.: Optimizing Murty’s ranked assignment method. IEEE Trans. Aerosp. Electron. Syst. 33, 851–862 (1997)

  13. Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Trans. Rob. 33(5), 1255–1262 (2017)

  14. Premebida, C., Garrote, L., Asvadi, A., Ribeiro, A., Nunes, U.: High-resolution LIDAR-based depth mapping using bilateral filter, pp. 2469–2474, November 2016

  15. Qi, C., Yi, L., Su, H., Guibas, L.: PointNet++: deep hierarchical feature learning on point sets in a metric space. In: NIPS (2017)

  16. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525 (2017)

  17. Rozsa, Z., Sziranyi, T.: Object detection from a few LIDAR scanning planes. IEEE Trans. Intell. Veh. 4(4), 548–560 (2019)

  18. Rusu, R.B.: Semantic 3D object maps for everyday manipulation in human living environments. KI - Künstliche Intelligenz 24(4), 345–348 (2010)

  19. Schneider, N., Schneider, L., Pinggera, P., Franke, U., Pollefeys, M., Stiller, C.: Semantically guided depth upsampling. In: Rosenhahn, B., Andres, B. (eds.) GCPR 2016. LNCS, vol. 9796, pp. 37–48. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45886-1_4

  20. Torr, P.H.S., Zisserman, A.: MLESAC: a new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst. 78, 138–156 (2000)

  21. Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.: Sparsity invariant CNNs. In: International Conference on 3D Vision (3DV) (2017)

  22. Wang, P., Huang, X., Cheng, X., Zhou, D., Geng, Q., Yang, R.: The ApolloScape open dataset for autonomous driving and its application. IEEE Trans. Pattern Anal. Mach. Intell. 42, 2702–2719 (2019)

  23. Wencan, C., Ko, J.H.: Segmentation of points in the future: joint segmentation and prediction of a point cloud. IEEE Access 9, 52977–52986 (2021)

  24. Weng, X., Wang, J., Levine, S., Kitani, K., Rhinehart, N.: Inverting the pose forecasting pipeline with SPF2: sequential pointcloud forecasting for sequential pose forecasting. In: Conference on Robot Learning (CoRL) (2020)

  25. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)

  26. Zhao, S., Gong, M., Fu, H., Tao, D.: Adaptive context-aware multi-modal network for depth completion. IEEE Trans. Image Process. 30, 5264–5276 (2021)

  27. Zhou, L., Li, Z., Kaess, M.: Automatic extrinsic calibration of a camera and a 3D LiDAR using line and plane correspondences. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5562–5569 (2018)

Acknowledgements

The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Autonomous Systems National Laboratory Program and by the Hungarian National Science Foundation (NKFIH OTKA), grant No. K139485.

Author information

Corresponding author

Correspondence to Zoltan Rozsa.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Rozsa, Z., Sziranyi, T. (2022). Temporal Up-Sampling of LIDAR Measurements Based on a Mono Camera. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13232. Springer, Cham. https://doi.org/10.1007/978-3-031-06430-2_5

  • DOI: https://doi.org/10.1007/978-3-031-06430-2_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06429-6

  • Online ISBN: 978-3-031-06430-2

  • eBook Packages: Computer Science, Computer Science (R0)
