
Low-Cost LiDAR-Based Vehicle Detection for Self-driving Container Trucks at Seaport

  • Conference paper in: Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2021)

Abstract

Self-driving technology has developed rapidly over the past decades, driven by new sensors and by car manufacturers becoming more open to it. However, fully self-driving vehicles for the general public still have a long way to go. Most studies therefore focus on self-driving in special scenarios, such as park sightseeing cars, express logistics vehicles, street sweepers, indoor service robots, and special vehicles in mining or seaport areas. One critical issue is that the cost of a self-driving vehicle must be strictly controlled for commercial use. This paper presents a low-cost, LiDAR-based moving-obstacle detection and tracking approach for self-driving container trucks in low-speed seaport areas. We build a CNN model for obstacle detection on the bird’s-eye-view (BEV) map generated from two low-density LiDARs mounted at the head of a container truck, and use a boosting tracker to achieve real-time processing on an embedded NVIDIA Jetson TX2 module. Evaluation on the collected data shows that our Strided-Yolo model achieves the highest mAP on the BEV projection map among the compared models.
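To make the BEV projection mentioned above concrete, here is a minimal, hypothetical sketch (not the paper's actual preprocessing code) of how a LiDAR point cloud can be rasterized into a multi-channel bird’s-eye-view map. The grid bounds, cell resolution, and channel choices (max height, mean intensity, point density) are illustrative assumptions; in a two-LiDAR setup such as the one described here, both point clouds would first be transformed into a common vehicle frame before projection.

```python
# Illustrative sketch only: rasterize a LiDAR point cloud into a BEV map.
# ROI bounds, resolution, and channel definitions are assumed values.
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0),
                      z_range=(-2.0, 3.0), resolution=0.1):
    """Convert an (N, 4) array of points [x, y, z, intensity] into a
    3-channel BEV map: max height, mean intensity, point density."""
    # Keep only points inside the region of interest.
    mask = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
        (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
        (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1])
    )
    pts = points[mask]

    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((h, w, 3), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    # Discretize x/y coordinates into grid cell indices.
    xi = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int32)
    yi = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int32)

    for cx, cy, z, inten in zip(xi, yi, pts[:, 2], pts[:, 3]):
        # Channel 0: normalized maximum height in the cell.
        height = (z - z_range[0]) / (z_range[1] - z_range[0])
        bev[cx, cy, 0] = max(bev[cx, cy, 0], height)
        # Channel 1: accumulate intensity (averaged below).
        bev[cx, cy, 1] += inten
        counts[cx, cy] += 1.0

    nonzero = counts > 0
    bev[nonzero, 1] /= counts[nonzero]                    # mean intensity
    bev[:, :, 2] = np.minimum(1.0, np.log1p(counts) / np.log(64.0))  # density
    return bev
```

Such a BEV image can then be fed to a 2D CNN detector, and the resulting bounding boxes handed to a lightweight tracker for frame-to-frame association on the embedded platform.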



Acknowledgment

This work has been supported by the China Postdoctoral Science Foundation (2020M681798), the Qianjiang Excellent Post-Doctoral Program (2020Y4A001), the 2020 Zhejiang Postdoctoral Research Project (ZJ2020011), the JITRI Suzhou Automotive Research Institute Project (CEC20190404), and the Chongqing Autonomous Unmanned System Development Foundation and Key Technology Strategic Research Project (2020-XZ-CQ-3). The authors would like to thank Plusgo for their cooperation during data collection.

Author information


Corresponding author

Correspondence to Zhenchao Ouyang.



Copyright information

© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Zhang, C., Ouyang, Z., Ren, L., Liu, Y. (2021). Low-Cost LiDAR-Based Vehicle Detection for Self-driving Container Trucks at Seaport. In: Gao, H., Wang, X. (eds) Collaborative Computing: Networking, Applications and Worksharing. CollaborateCom 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 407. Springer, Cham. https://doi.org/10.1007/978-3-030-92638-0_27


  • DOI: https://doi.org/10.1007/978-3-030-92638-0_27


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92637-3

  • Online ISBN: 978-3-030-92638-0

  • eBook Packages: Computer Science, Computer Science (R0)
