Abstract
Lane marks regulate vehicle routes and delineate the prior drivable area, so robust lane detection plays a vital role in intelligent vehicle navigation. Lane detection algorithms are usually composed of two steps: lane candidate generation and lane curve fitting. The latter not only fits lane mark candidates with concise curve forms but also removes the outliers produced in the former step; lane curve fitting is therefore crucial for lane detection. A common practice in this step is to carry out the curve fitting on the bird's-eye view (BEV), which mitigates the distortion caused by the perspective projection and improves the fitting results. However, because road surfaces slope in real scenarios, the relative pose between the camera and the ground can change frequently, and using a fixed pre-calibrated projection matrix can introduce extra errors into curve fitting. In this paper, we propose a homography prediction network, named HP-Net, for robust lane mark fitting on variously sloping roads. The network adaptively predicts a homographic projection matrix for each input image, producing a BEV suitable for lane fitting. Exploiting the parallel nature of multiple lanes, HP-Net can be trained by reusing the lane labels originally created for lane mark segmentation, without introducing any extra annotation effort. Our method has been verified on the large-scale CULane dataset and on a dataset acquired by ourselves. Experimental results show that the proposed model effectively improves the robustness and accuracy of lane detection.
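To illustrate the fitting step described above, the following is a minimal sketch (not the authors' implementation) of how a predicted 3×3 homography could be used to project lane candidate points onto the BEV, where a low-order polynomial is then fitted. The function names (`warp_points`, `fit_lane`) and the choice of fitting x as a polynomial in y are illustrative assumptions.

```python
import numpy as np

def warp_points(H, pts):
    """Project 2D image points through a 3x3 homography H to BEV coordinates."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    bev = (H @ pts_h.T).T
    return bev[:, :2] / bev[:, 2:3]  # dehomogenize

def fit_lane(H, lane_pts, degree=2):
    """Fit a polynomial x = f(y) to lane candidate points after BEV projection.

    On the BEV, lanes are close to straight or gently curved, so a low-degree
    polynomial suffices and outliers are easier to reject.
    """
    bev = warp_points(H, lane_pts)
    return np.polyfit(bev[:, 1], bev[:, 0], degree)

# Example: with an identity homography, points on the line x = y + 90
# yield slope 1 and intercept 90.
H = np.eye(3)
coeffs = fit_lane(H, [(100, 10), (110, 20), (120, 30)], degree=1)
```

In the paper's setting, `H` would come from HP-Net's prediction for the current frame rather than a fixed calibration, so the fitted curves remain consistent even as the road slope changes.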
Availability of data and material
All data and materials support their published claims and comply with field standards.
Code availability
Custom code.
Acknowledgments
This work is supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization under grant No. U1709214 and the Key Research & Development Plan of Zhejiang Province (2021C01196).
Funding
This work is supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization under grant No. U1709214 and the Key Research & Development Plan of Zhejiang Province (2021C01196).
Author information
Contributions
Yiman Chen and Zhiyu Xiang contributed to the study conception and design. Data collection and material preparation were conducted by Yiman Chen and Wentao Du. Experiment implementation and result analysis were performed by Yiman Chen. The manuscript was written by Yiman Chen, and all authors approved the version to be submitted.
Ethics declarations
Conflicts of interest
Not applicable.
Ethics approval
This work is original, and our paper has not been submitted to any other journal. The results are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation (including image-based manipulation). We adhere to discipline-specific rules for acquiring, selecting, and processing data.
Consent to participate
Not applicable.
Consent for publication
All authors agreed with the content and gave explicit consent to publish the paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Chen, Y., Xiang, Z. & Du, W. Improving lane detection with adaptive homography prediction. Vis Comput 39, 581–595 (2023). https://doi.org/10.1007/s00371-021-02358-1